
DMT the soul molecule

Leonardo

Active Member
But long term memory itself never "improves", nor does it "adapt". It is a classification of memory types, not the memories themselves, nor the processes through which these are encoded, stored, and accessed.

Ah, no, long-term memory can improve when it's procedural memory, which is why someone can improve at chess or accounting or any other procedural skill.

Finally, it is again a contradiction in terms to say that one can be "self-aware" without a "self", because "self-aware" means "to be cognizant of one's self".

I never said one can be self-aware without a self.:sarcastic And to the point: Molaison wasn't cognizant of himself.
 

LegionOnomaMoi

Veteran Member
Premium Member
Ah, no, long-term memory can improve when it's procedural memory, which is why someone can improve at chess or accounting or any other procedural skill.

Think of procedural memory like "muscle memory". It's the name we give "memories" which describe our ability to carry out procedures without paying (conscious) attention, such as driving a familiar route while thinking about the song on the radio, or knowing how to ride a bike. Chess involves understanding concepts like which pieces can move in what ways, the importance of particular positions, pieces, areas of the board, configurations, etc. These all involve semantic memory, not procedural.

To improve abilities, whether playing chess or riding a bike, is to learn. Learning is very much related to memory, but it uses some different terminology and talks about memory in different ways, because (for example) the distinction between short term memory and long term memory is often rather useless here. When I am learning to play chess, or learning to improve my game, I am relying on both long and short term memories. However, once again, both "short term" and "long term" memory refer to the classifications. If you improve your ability to ride a bike or learn accounting, your "long term memory" is neither adapting nor improving.



I never said one can be self-aware without a self.:sarcastic

You asked this:
Is an anthropocentric view of a human being's self-awareness. So the question I ask now: Is there a need for self-awareness for there to be a self? :sarcastic

I answered:
Self-awareness means a conscious understanding of "self". By definition, you need a "self" for self awareness.

You disagreed with this, and said rather than seeing self in the way I had, we should understand it as
" A goal seeking auto-adaptive system, where the goals are to pursue virtual rewards or avoid virtual punishments."

This can describe anything from a computer program to a plant, or from a single cell to a colony of insects. However, it has nothing to do with being either aware, or having some concept of "self" which one is aware of. And self-awareness requires both by definition.
 

apophenia

Well-Known Member
Originally Posted by Leonardo
" A goal seeking auto-adaptive system, where the goals are to pursue virtual rewards or avoid virtual punishments."
This can describe anything from a computer program to a plant, or from a single cell to a colony of insects. However, it has nothing to do with being either aware, or having some concept of "self" which one is aware of. And self-awareness requires both by definition.

I think Legion has nailed it here, and also made a point which is very relevant in discussions about 'simulations of consciousness' which occasionally occur here on the forums.

I would add that (according to reports ;) ) psychedelic insight is largely about a shift of attention from the 'goal-seeking auto-adaptive system' aspect of mind to absorption in the indefinable and often blissful oceanic self-awareness - about which almost nothing sensible can be said.:)

Certainly the tryptamines go in that direction anyway. The phenethylamines tend towards activation of empathy and insight into the state of one's personal morality/ethics, and how that is all working out for you ...apart from maybe amphetamines, which have proven useful in terms of enhancing goal-oriented behavior of various kinds.

But amphetamines are dangerously moreish due to the flood of dopamine (which can also result in psychosis and damage to the dopaminergic system), so if I were to suggest a chemical route to improved teleological behavior, it would be to use the aminos l-tyrosine and l-dopa (and don't forget to take green tea extract and quercetin if you use l-dopa - google it to find out why). This is a legal and safe way to enhance performance of the kind Leonardo is referring to.

I reiterate - trying to use psychedelia to control goal-oriented behavior is a case of 'wrong tool for the job', and will likely result in confusion, mania and obsession.
 

Leonardo

Active Member
These all involve semantic memory, not procedural.

From a neural network perspective there is no difference between semantic memory and procedural memory; both operate on the very same principles of neural networks.

If you improve your ability to ride a bike or learn accounting, your "long term memory" is neither adapting nor improving.

LOL... Sorry, but you're wrong. Scans of rat cortical tissues clearly demonstrate that there are changes in long term memory in procedural behaviors.


This can describe anything from a computer program to a plant, or from a single cell to a colony of insects. However, it has nothing to do with being either aware, or having some concept of "self" which one is aware of. And self-awareness requires both by definition.

You're wrong here again. Neither plants nor single-celled organisms are capable of using virtual rewards or punishments. Your definition of self is unprovable, whereas my definition of a self is based on behaviors that are based on neurological processes that can be defined as a reward or punishment. We can clearly state whether an animal is feeling pain or not; we can look at the brains of animals, find a limbic system, and see firing sequences that respond to threats or rewards. For you to argue otherwise is ludicrous! From my definition of self, the degree of self-awareness between a frog and a bird is the degree of complexity of the neurological system. That is why a human self and a dog self are not the same, nor is a chimpanzee self the same as a dog self.

You have NO SUCH ABILITY to assess these kinds of notions with your anthropocentric definitions that are unprovable. At least with my approach I can begin to architect a solution. You're lost in a cloud asking ridiculous questions like "How do I know blue is blue?" LOL. Maybe not you in particular, but many lost in the woods of AI.
 

Leonardo

Active Member
I reiterate - trying to use psychedelia to control goal-oriented behavior is a case of 'wrong tool for the job', and will likely result in confusion, mania and obsession.

And I would argue differently, since suggestion can control a psychedelic trip. Having a different mindset changes how one experiences DMT. Change the mindset from religious superstition to computational intelligence and the psychedelic impressions are very different...:D
 

apophenia

Well-Known Member
And I would argue differently, since suggestion can control a psychedelic trip. Having a different mindset changes how one experiences DMT. Change the mindset from religious superstition to computational intelligence and the psychedelic impressions are very different...:D

Sure, set and setting are crucial. And certainly the content of imagery may be affected. I also gave an example earlier in the thread of an experience of heightened understanding of computer architecture. Nevertheless, that experience was (a) unusual, (b) unplanned, and (c) facilitated by hydergine (dihydroergotoxine).

My points remain relevant. Can you give any examples you have heard of which clearly demonstrate the kind of 'control' you are proposing? Objective evidence please...

Also - take the hint! If programmed control of mental capacities is what interests you, try pursuing what has shown real promise - build yourself a float tank, and supplement with hydergine, tyrosine and l-dopa. Trust me, I'm an alien anthropologist who has been observing your species for some time... that will work for you :D
 

LegionOnomaMoi

Veteran Member
Premium Member
From a neural network perspective there is no difference between semantic memory and procedural memory; both operate on the very same principles of neural networks.

There are a couple of issues here. The first is the idea of a "neural network perspective" or a set of neural network "principles". "Neural network" is a term used in three quite different fields, in quite different ways:
1) Neurophysiology and other psychological sciences, where the term is used to understand how biological neurons operate, and where work does not tend to relate to computational neural modelling
2) Computational neuroscience and some work within computational intelligence/A.I., which is concerned with using neural networks to understand how neurons are able to encode, process, and store information
3) A totally different use within computer science and mathematics, where neural networks are combined with other soft computing paradigms (swarm intelligence, evolutionary algorithms, etc.) to solve diverse problems.

The first two are related in that they are both concerned with how brains can work. However, the way they use terms relating to learning and memory in completely different ways has resulted in a mess within the literature, precisely because they understand "neural networks" from different perspectives. Within the literature on computational modelling of biological neural networks, for example, "long term memory" is contrasted with "short term memory" in terms of synaptic weights. The problem is that these "weights" are mathematical, and occur in n-dimensional space (and as physics has four dimensions which are relevant for neurons, 100- or 10,000-dimensional space can't really have anything to do with actual neurons). These researchers talk about "learning" and "memory" using far more from the mathematical side of information theory, as well as graph theory, network topology, and other methods, to construct mathematical models of neural networks which are supposed to be "approximations" of what neurons can do, but which operate under fundamentally different principles. Thus while this:
Schematically:
[schematic: ADALINE unit with sigmoidal activation]


The simplest ANNs have just an input layer and an output layer with a defined threshold value. Basically, a single bipolar output y ∈ {-1,1} is a function (or iterated function) of n 2-valued inputs of the "neuron", each with a weight w ∈ [-1,1]. The output y is a thresholded summation of the weighted inputs: if the result reaches the threshold, the neuron "fires", and if not, it doesn't.

Let w represent a vector of n weights and x an input vector with n elements. Then we have

y = +1 if wᵀx >= threshold value
&
y = -1 if wᵀx < threshold value.

In reality, we'd have y(t+1) because we're dealing with an iterated function, but the gist is still the same. Schematically:

[schematic: bipolar signum (threshold) unit]
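The threshold unit just described can be sketched in a few lines of Python. This is a generic illustration (the function name, weights, and threshold are all made up for the example, not taken from any of the sources in this thread):

```python
def threshold_unit(w, x, theta):
    """Bipolar threshold "neuron": fires (+1) iff the weighted sum
    of the inputs, sum_i w_i * x_i, reaches the threshold theta;
    otherwise it outputs -1."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else -1

# Toy example: two inputs, equal weights, threshold 1.0
w = [0.6, 0.6]
print(threshold_unit(w, [1, 1], 1.0))   # both inputs on -> 1 (fires)
print(threshold_unit(w, [1, 0], 1.0))   # one input on -> -1 (doesn't fire)
```

Note that nothing here "learns": the weights are fixed, which is exactly why this simplest schema is so limited.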


A "simple" method which vastly increases the power of network schema above is the addition of another threshold function with an adaptive parameter of some sort. Instead of just a simple summation of weights, the linear combination y (the output) becomes part of a larger summation function. This linear combiner not only takes the output as input, but is also a composite function of the input vector and some adaption function. For example:

[schematic: adaptive linear network (ADALINE) with weight-update feedback]

might easily be understood by someone who works in computational neuroscience or in computer science and soft computing, it may be utterly alien to someone who works as a researcher in the cognitive sciences and psychology (works with fMRI, looks at actual brains, studies memory and conceptual processing in the brain, etc.).
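The "adaptive parameter" idea can be made concrete with the classic LMS (Widrow-Hoff) update used in ADALINE-style units. The sketch below is a generic textbook version, assuming nothing beyond what was described above; the function name, learning rate, and epoch count are my own choices for the example:

```python
import random

def train_adaline(samples, lr=0.05, epochs=200, seed=0):
    """ADALINE-style unit: the output is the linear combination w.x, and
    the weights adapt by the LMS (Widrow-Hoff) rule, nudging w against the
    error between the target and the *linear* output (not the thresholded one)."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))          # linear combiner
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # LMS adaptation
    return w

# Learn bipolar AND (first input is a constant bias term)
samples = [([1, -1, -1], -1), ([1, -1, 1], -1), ([1, 1, -1], -1), ([1, 1, 1], 1)]
w = train_adaline(samples)
out = [1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1 for x, _ in samples]
print(out)  # -> [-1, -1, -1, 1]
```

The point of the sketch is only that the weights now change in response to error, which is precisely the "adaptation" the schematic adds to the fixed threshold unit.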

The reason for this divide is how incredibly limited our best neural network models are. You may have come across CAPTCHAs before: they appear on the internet and require you to type in some word that is drawn in order to show you are a human being. This is because there are people who program "bots" to sweep the internet for data, and a large number of sites do not want those programs causing traffic-flow problems. CAPTCHAs are successful because it takes incredibly sophisticated use of neural networks (usually in combination with other soft computing techniques completely unrelated to the brain) to get a program to match any given set of symbols produced "randomly" by CAPTCHAs to letters and/or numbers.

Humans across the globe do this thousands and thousands of times per day.

So those who want to understand how memory works in the brain can do very little with neural networks. In fact, computational neuroscientists have one set of models for understanding individual neurons, and then abandon these when they create neural network models. They do this because the way single neurons are modelled involves a lot of properties which neurons have, but which we either can't use or don't understand how to use in neural network models.

In addition, most of the classification of memory "types" originated before neuroimaging or most of our knowledge about neural networks (computational or biological). They are increasingly abandoned because they lack empirical support.



LOL... Sorry, but you're wrong. Scans of rat cortical tissues clearly demonstrate that there are changes in long term memory in procedural behaviors.

Source?

You're wrong here again. Neither plants nor single-celled organisms are capable of using virtual rewards or punishments.

Both "rewards" and "punishments" are concepts. What dogs can do is associate certain stimuli, like the sound of the word "food" with some sense of time, place, motor programs, the act of eating, some sort of "dog concept" version of food, but not with rewards. They don't understand that certain things are rewards or punishments, but they automatically react to certain things (like food) in particular ways.

A venus fly trap does this. Not on the level a dog can, but it "reacts" to things it "wants" (like food). Ant colonies can do things a dog could not when it comes to problem solving (path minimization and complex coordinated networking to achieve goals). In fact, both venus fly traps and ants are closer to the "neural networks" of computational neuroscientists than our human or dog brains.
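The "path minimization" ant colonies manage can be illustrated with a toy simulation of the classic double-bridge setup. Everything below (parameter values, the pheromone update rule) is a generic hedged sketch of the ant-colony idea, not a model of real ants or of any specific study:

```python
import random

def double_bridge(short_len=1.0, long_len=2.0, n_ants=100,
                  rounds=30, evaporation=0.5, seed=42):
    """Toy ant-colony "double bridge": each ant picks one of two paths with
    probability proportional to its pheromone level; shorter trips deposit
    more pheromone per round, so the colony converges on the short path."""
    random.seed(seed)
    pher = {"short": 1.0, "long": 1.0}
    length = {"short": short_len, "long": long_len}
    for _ in range(rounds):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(n_ants):
            p_short = pher["short"] / (pher["short"] + pher["long"])
            path = "short" if random.random() < p_short else "long"
            deposits[path] += 1.0 / length[path]   # shorter path -> more pheromone
        for path in pher:
            pher[path] = (1 - evaporation) * pher[path] + deposits[path]
    return pher

pher = double_bridge()
print(pher["short"] > pher["long"])  # the colony settles on the shorter path
```

No individual "ant" here represents the goal of a short path; the minimization emerges from the positive feedback of the pheromone loop, which is the sense in which colonies "solve" problems no single member understands.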
 

Leonardo

Active Member
Both "rewards" and "punishments" are concepts.

Well, not exactly; you've associated the words reward or punishment with personal experience. After all, if you never experienced an emotional reward or punishment, or were never taught the association of the words with such an experience, you wouldn't have the "concept". But actually you're no different than the dog that's been taught to associate a word with an experience. More to the point: rewards reinforce behaviors and punishments discourage behaviors. There is absolutely no need to understand the concept of a reward or punishment for the effect of the reward (emotional comfort, love, euphoria, or even satiating an itch) or punishment (physical or emotional pain, discomfort, etc.) on motivating behaviors. The whole point is the signaling process and the information processing that results in the effect of rewards or punishments. In the end there is NO DIFFERENCE between the signalling of the limbic system of a dog or a human being!:eek:


A venus fly trap does this. Not on the level a dog can, but it "reacts" to things it "wants" (like food). Ant colonies can do things a dog could not when it comes to problem solving (path minimization and complex coordinated networking to achieve goals). In fact, both venus fly traps and ants are closer to the "neural networks" of computational neuroscientists than our human or dog brains.

You see, this is where I grit my teeth at how absurd an argument like this can be.:thud:..OK, a venus fly trap doesn't WANT ANYTHING! It doesn't have a neurological complex to process information in terms of emotional gratifications, or sensory influences like pain! It has no capacity to DESIRE! But guess what does?..a DOG!:yes:
 

LegionOnomaMoi

Veteran Member
Premium Member
Your definition of self is unprovable. Whereas my definition of a self is based on behaviors that are based on neurological processes that can be defined as a reward or punishment.

The use of "rewards" and "punishments" within psychology was part of the behaviourist program, which ended mainly in the 50s and 60s. It is anachronistic. Behaviourism (or behaviorism) treated the "mind" (including concepts) as a black box outside of scientific purview, and tried to understand how human cognitive systems worked by observing behaviors alone. This completely failed, and a combination of work, including that from the "fathers" of modern neural networks, Hodgkin and Huxley, but also Chomsky's Syntactic Structures (and his review of Skinner's model of language), G. A. Miller and even those from Tolman on who worked with actual rats developing "latent learning", all showed that this approach is utterly incapable of dealing even with understanding how rats behave. Something like visual attention, a fundamental process used by all kinds of animals, is related to "memory" but not in any way that lends itself readily to things like "procedural" or "implicit" memory versus "semantic" or "declarative" or whatever.

In order to use "rewards" and "punishments" to understand a system, whether it is a rat, a dog, or a human, that is capable of processing concepts, not just avoiding noxious stimuli and "seeking" things like "food", this dichotomy doesn't work. It doesn't work because it uses concepts and applies them to things like dogs which can't understand what the concepts are. They can seek out goals, or avoid things like predators, but so can bee hives and plants (plants can display an impressive range of defense mechanisms, seeking behavior like bending towards sunlight, etc.).

Moreover, you don't "prove" that a definition for something like "self-awareness" is correct. Proof is for mathematics. The reason terms become adopted within the psychological/cognitive sciences, such as "attention", or become increasingly abandoned, such as "long term memory", is because these are models which we use to try to explain how the human mind works. I can define self as the ability to type sentences on a keyboard, or in terms of whether or not something has a cortex, and this is a very "provable" (i.e., testable) definition. It's just useless.

Perhaps the central debate within fields concerned with things like learning and memory is to what extent we use areas of the brain and our sensory systems associated with them to construct, encode, process, etc., concepts. Many cognitive scientists believe that all concepts are "embodied" and that even abstractions such as "hope" are fundamentally related to regions of the brain which are known to relate to vision, hearing, movement, touch, etc. These scientists point to the activation of sensorimotor brain regions during experimental studies with functional neuroimaging devices (e.g., fMRI) when subjects recall, or read, or otherwise react to abstract words (written or spoken). They also point to response timing in behavioral studies which demonstrate that humans associate abstract concepts like "hope" with a direction (and "despair" with the opposite direction). Language itself is filled with evidence for this theory.

However, it is fundamentally at odds with the standard classification of memory "types"; but as most of these have no support anyway, and are largely relics used increasingly only for instructional purposes, nobody cares. The people who oppose "embodied cognition" do so because they approach the brain in terms of modules, which is also fundamentally at odds with the classification of memory "types".


We can clearly state whether an animal is feeling pain or not; we can look at the brains of animals, find a limbic system, and see firing sequences that respond to threats or rewards. For you to argue otherwise is ludicrous!
Have you ever constructed, run, or analyzed the data from a neuroimaging study? Or even read a neuroimaging study? Or, to put it differently, what is your basis for arguing what we can and cannot show in terms of "firing sequences"? Apart from anything else:

1) Nobody knows how the brain uses individual neurons, if at all, and their "firing sequences", to do or understand anything. We do know that most complex behaviors and thoughts have nothing to do with information in "firing sequences" but with the correlations and synchronization among multiple neurons. When this happens, an individual neural "firing sequence" has no meaning, and as it happens all the time, trying to understand much of anything by understanding "firing sequences" is not going to get you anywhere.
2) In order to test whether or not an animal is feeling "pain" or is seeking a "goal" we'd have to formally define what these are and how they can be recognized by activation in particular regions. However, while pain can be measured without involving the brain at all (it's detected by nerves, after all), "goals" are much harder. When my dog hears the word "food", regions related to "goal" and "goal seeking behavior" will activate. But the sound is not the goal. The goal (the actual neural representation of the dog's concept of food) is a complex pattern of neural activity located in different regions in ways which are constantly changing. It is not at all easy to tell what is a goal using neuroimaging.

And finally, even if we could develop some empirically sound method of testing whether or not an animal understood something as a "goal" or "punishment", how does this mean they have a sense of "self"? As I said, I can easily formulate testable definitions of self. But if they are useless for understanding either dogs or humans, or anything, then there is no point in doing so. And as I said, everything from plants and cells to swarms and colonies of things like ants or bees can exhibit "goal-directed" or "avoidance" behavior.

From my definition of self, the degree of self-awareness between a frog and a bird is the degree of complexity of the neurological system.

Which you cannot test using goals, or even define in the first place in order to test using goals.


You have NO SUCH ABILITY to assess these kinds of notions with your anthropocentric definitions that are unprovable. At least with my approach I can begin to architect a solution.

Your ability to use your approach depends on how accurate what you say is about both neurophysiology and our methods to understand neurophysiological properties in terms of behaviors, concepts, and so forth. But the ways in which you describe how one might implement your approach are at odds with how actual neuroscientists and psychologists use neuroimaging and our current understanding of functional neuroanatomy.
 

LegionOnomaMoi

Veteran Member
Premium Member
Well not exactly, you've associated the words reward or punishment with personal experience.

Let me rephrase: words in some sense correspond (generally) to "things", whether a class of objects ("rock"), some motions ("fall"), or abstract notions ("reward" or "punishment"). The words "reward", "punishment", "goal" are usually called nouns, and the "things" they represent are abstractions. Just as the word "rock" doesn't refer to a particular rock, neither does the word "reward" correspond to any particular "thing" but rather various classes of "things" depending on how we are using the word, who is using the word, etc.

Put another way, I can associate the behavior of a dog who rushes to someone saying "treat", and I can do so for similar behavior for someone saying "food", or to the behavior when the dog sees food lying somewhere, or sees "prey" (e.g., chases birds), and I can call all of these different behaviors "goals". I am using a concept (that of "goals", or perhaps "rewards"), which the dog is not using. Nor can the dog actually understand something as abstract as "goals". They can certainly represent somehow the idea of a particular thing they might want, and do so in ways that birds could not, and birds can do this in ways that plants cannot.

However, as "goals" are (like you said), particular to individual animals, the class of things that a particular animal might "seek" is not understood by that animal the way we can; namely, by classifying them as instantiations of the concept "goal".

More to the point: rewards reinforce behaviors and punishments discourage behaviors.

If someone points a gun at me and says "give me your wallet", this is probably going to encourage me to engage in that particular behavior (giving them my wallet). Is this a reward?


There is absolutely no need to understand the concept of a reward or punishment for the effect of the reward (emotional comfort, love, euphoria, or even satiating an itch) or punishment (physical or emotional pain, discomfort, etc.) on motivating behaviors.

There is. If you can't define the concept in a testable way, then you can't use it in experiments.

And if you can't demonstrate how it is useful or meaningful in terms of some model of "self", then there is no need to try.

The whole point is the signaling process and the information processing that results in the effect of rewards or punishments. In the end there is NO DIFFERENCE between the signalling of the limbic system of a dog or a human being!:eek:

There is, and more importantly the behaviors and concepts that a dog or a human being might "seek" are not represented by or in the limbic system.



You see, this is where I grit my teeth at how absurd an argument like this can be.:thud:..OK, a venus fly trap doesn't WANT ANYTHING!

Agreed. Which is why the way you are talking about "rewards" vs. "punishments" corresponds to an abandoned model (behaviorism) which was inadequate for understanding the minds of rats, let alone something as complex as self-awareness. We no longer look at "behaviors", as these cannot inform us about how concepts are processed or understood.

But guess what does?..a DOG!:yes:
How do you know? A dog acts in a particular way in order to obtain a goal. So does a flower, or a cell. We know that the flower and cell have no ability to understand concepts and thus cannot "want" anything (to "want" requires a certain capacity to represent concepts internally). But if we merely formulate "goals" in terms of "goal-seeking behavior", then this definition is applicable to cells and flowers. And if we say it isn't because we want to limit it to things which can "want", then we have to define what want is in a way that can be tested and shown to exist for dogs but not for plants.
 

Leonardo

Active Member
Have you ever constructed, run, or analyzed the data from a neuroimaging study? Or even read a neuroimaging study? Or, to put it differently, what is your basis for arguing what we can and cannot show in terms of "firing sequences"? Apart from anything else:

1) Nobody knows how the brain uses individual neurons, if at all, and their "firing sequences", to do or understand anything. We do know that most complex behaviors and thoughts have nothing to do with information in "firing sequences" but with the correlations and synchronization among multiple neurons. When this happens, an individual neural "firing sequence" has no meaning, and as it happens all the time, trying to understand much of anything by understanding "firing sequences" is not going to get you anywhere.

And you're behind the times; neuromorphic chips are being developed by IBM, Qualcomm, Stanford University, and many others. Even DARPA has contests for the development of neuromorphic chips. You might want to read Dynamical Systems in Neuroscience by Izhikevich; BTW, he is heading up the R&D at Qualcomm in San Diego for their neuromorphic chips.

And finally, even if we could develop some empirically sound method of testing whether or not an animal understood something as a "goal" or "punishment", how does this mean they have a sense of "self"? As I said, I can easily formulate testable definitions of self. But if they are useless to understand either dogs or humans, or anything, then there is no point in doing so. And as I said, everything from plants and cells to swarms and colonies of things like ants or bees can exhibit "goal-directed" or "avoidence" behavior.

The same can be said about human beings: you don't know if others have a sense of self.


Which you cannot test using goals, or even define in the first place in order to test using goals.

What? No, what about motivating behaviors didn't you get? In any case you aren't really explaining animal intelligence, nor recognizing the similarity of brain anatomy between mammals, which completely ignores the evolution of neurological systems. If we look at the heart of a monkey and the heart of a human being, no one can deny the functional similarity of the two. When there is a limbic system in a monkey's brain, a dog's brain, a mouse's brain, a sheep's brain, and a human's brain, there is no denying that in all mammalian brains the limbic system is a key source of emotional signalling.:rolleyes:
 

Leonardo

Active Member
How do you know? A dog acts in a particular way in order to obtain a goal. So does a flower, or a cell. We know that the flower and cell have no ability to understand concepts and thus cannot "want" anything (to "want" requires a certain capacity to represent concepts internally). But if we merely formulate "goals" in terms of "goal-seeking behavior", then this definition is applicable to cells and flowers. And if we say it isn't because we want to limit it to things which can "want", then we have to define what want is in a way that can be tested and shown to exist for dogs but not for plants.

I'm not going to answer all the other arguments you've posted because this last one says it all... This is where you fail to get the simple point: what is a CONCEPT? YOU KEEP USING YOUR ANTHROPOCENTRIC POINT OF VIEW. That doesn't mean a dog doesn't have a conceptual framework; it's just not the same conceptual framework as a human being's!

In the end all your arguments for animals CAN BE SAID OF OTHER HUMAN BEINGS; all you can notice is humans acting in a particular way, and you anthropomorphize that they are self-aware...
 

LegionOnomaMoi

Veteran Member
Premium Member
And you're behind the times; neuromorphic chips are being developed by IBM, Qualcomm, Stanford University, and many others. Even DARPA has contests for the development of neuromorphic chips.

How does this relate to anything that I said?



You might want to read Dynamical Systems in Neuroscience by Izhikevich; BTW, he is heading up the R&D at Qualcomm in San Diego for their neuromorphic chips.
Please look at the dates of the quotes below to confirm that I referenced the book you refer to before this thread started:


Neurons synchronize. "This chapter, found on the author's Web page (www.izhikevich.com), considers networks of tonically spiking neurons. Like any other kind of physical, chemical, or biological oscillators, such neurons could synchronize and exhibit collective behavior that is not intrinsic to any individual neuron." from the final chapter (available only online) of Dynamical Systems in Neuroscience a volume from the edited series Computational Neuroscience (MIT press).
from Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting by Eugene M. Izhikevich (MIT Press, 2007)

Yet not only are there other types of neurons, none of which has the "threshold" level that causes the neuron to generate an action potential; we aren't really sure about the mechanisms governing the membrane potential of neurons either. For example, that clip you linked to makes it seem as if we know how ions go in and out of a neuron, what the effect is, and so forth. In reality, even when it comes to ions, there is much we don't know: "Yet while membrane machinery that moves ions has gradually become clear, we are at a loss to explain how the cell knows how many ions it has, let alone how many it needs for normal function" from G. G. Somjen's Ions in the Brain (Oxford University Press, 2004).

We can get into even more specific issues than ions, such as research on the roles and regulations of specific ions. The edited volume Potassium Channels: Methods and Protocols (vol. 491 of the series Methods in Molecular Biology; 2008) is an example of what type of research is ongoing at even this level.

I guess the best way to answer the question about whether we know how a neuron works is to quote a monograph on the subject: "In every small volume of the cortex, thousands of spikes are emitted each millisecond...What is the information contained in such temporal pattern of pulses? What code is used by the neurons to transmit that information? How might other neurons decode the signal?... The above questions point to the problem of neuronal coding, one of the fundamental issues in neuroscience. At present, a definite answer to these questions is not known." from Gerstner & Kistler's Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge University Press, 2002).

We know that neurons have certain features like all cells, in addition to some special ones. We know that they "communicate" via electrical pulses. We don't know how they do this (because we don't even know what it is about neural spike trains that corresponds to a "signal"); we don't know much about what causes the spikes in the first place, and we aren't even sure what the best way to approach modeling neurons is.

You'll notice that I referenced the book you are talking about several times before this thread, and you can in fact look at a number of my posts to see the literature I have discussed concerning how the brain works. I actually have several monographs from that series (Computational Neuroscience, the monograph series that includes the book you mention).

And if you read the book, you'll notice that while there is a great deal of discussion concerning multidimensional space, topology, phase portraits, bursting, etc., there is almost nothing even related to "memory". If you look at another work Izhikevich recommends, also in the same monograph series (Computational Neuroscience, published by MIT Press), called Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, you will find far more on "neural networks", memory, and the other things you have referred to, none of which appear in Izhikevich's Dynamical Systems in Neuroscience. This is because his book is quite specific in approach: it is not concerned with how the mind, memory, or "self" works, but with the applicability and usefulness of a dynamical systems approach. It is intended primarily as an introduction to the mathematical concepts and how they relate to some basic neural processes, for graduate students or PhDs who lack a background in nonlinear mathematics and dynamical systems modeling, or who have some such background but cannot readily extend it to models of neural systems.
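To make concrete what kind of single-neuron dynamics the book studies, here is a minimal Python sketch of Izhikevich's well-known two-variable "simple model" of a spiking neuron (the 2003 model his later book builds on). The parameter values a, b, c, d are the standard "regular spiking" settings from that paper; the constant input current I = 10 is an arbitrary choice for illustration, not something taken from this thread.

```python
# A minimal sketch of Izhikevich's two-variable spiking-neuron model
# (Izhikevich, 2003), Euler-integrated in 1 ms steps with two 0.5 ms
# half-steps for v, mirroring his published MATLAB code. Parameters
# a, b, c, d are the standard "regular spiking" values; I = 10 is an
# arbitrary illustrative input current.

def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, t_max=1000):
    """Return spike times (ms) for a constant input current I."""
    v, u = c, b * c              # membrane potential and recovery variable
    spikes = []
    for t in range(t_max):       # 1 ms per step
        if v >= 30.0:            # spike peak reached: record and reset
            spikes.append(t)
            v, u = c, u + d
        for _ in range(2):       # two 0.5 ms half-steps for stability
            v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
    return spikes

if __name__ == "__main__":
    spikes = simulate_izhikevich()
    print(f"{len(spikes)} spikes in one simulated second")
```

Note that the reset at v >= 30 mV is a modeling convenience for where the spike peak is cut off, not a measured biological threshold, which illustrates the point above: the "threshold" in such models is an abstraction, not a known mechanism.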


The same can be said about human beings: you don't know if others have a sense of self.

I can ask. Humans can communicate using language, which is why we can tell if a human is in a state (e.g., a coma) in which we cannot affirm they have any sense of self. They cannot communicate with us.

What? No, what about motivating behaviors didn't you get?
Why should we use this behaviorist model, which was rejected half a century ago?
 

Leonardo

Active Member
Please look at the dates of the quotes below to confirm that I referenced the book you refer to before this thread started:

You may have referenced the literature, but your comments hint at futility. If the world depended on your beliefs, no one would even attempt a neuromorphic chip!

Why should we use this behaviorist model, which was rejected half a century ago?

No, the model is a goal-seeking auto-adaptive system that is motivated by arbitrated emotional signalling. The problem here is that you don't understand the concept of the model, and the proof is in this statement:
If someone points a gun at me and says "give me your wallet", this is probably going to encourage me to engage in that particular behavior (giving them my wallet). Is this a reward?

A clear understanding of my model and common sense make it easy to see that the person giving up the wallet is avoiding a punishment. Now let's take it slowly: the system bases rewards and punishments on a relative scale; ergo, avoiding a punishment is a reward!
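The claim that avoiding a punishment counts as a reward on a relative scale resembles the standard reward-prediction-error idea from reinforcement learning: score the outcome against what was expected. A hypothetical one-function illustration (the function and the numeric values are my own, not part of the poster's model):

```python
# Hypothetical sketch of "rewards and punishments on a relative scale":
# an outcome is scored against the expected baseline, as in a simple
# reward-prediction-error scheme. The numbers are illustrative only,
# not taken from any model described in the thread.

def relative_reward(outcome, expected):
    """Signed signal: positive if the outcome beats expectation."""
    return outcome - expected

# Expecting to be shot (say, -100) but merely losing a wallet (-10)
# produces a positive signal: on this relative scale, avoiding the
# punishment registers as a reward.
signal = relative_reward(outcome=-10.0, expected=-100.0)
print(signal)  # 90.0
```

On this reading the wallet example is consistent: "giving up the wallet" is only a reward relative to the worse expected outcome, not in absolute terms.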

I'm done arguing with you, Legion; your lack of any approach to AI is my issue with your arguments. The world is pressing on, and the fact that corporations like IBM and Qualcomm and the military are investing in real solutions to neural emulation, rather than drowning in the Legion "futility glass of water", means your perspective is long past. Computational intelligence will one day simulate human-like intelligence, and it will be in our lifetime...:yes:
 

LegionOnomaMoi

Veteran Member
Premium Member
You may have referenced the literature, but your comments hint at futility. If the world depended on your beliefs, no one would even attempt a neuromorphic chip!
Why are they attempting to do this? Decades ago, scientists said that you didn't need any special chip or special hardware; all that mattered was the algorithm. Then computers hit the world, and A.I. was just around the corner. That was 60+ years ago. Then, a few decades ago, the utter failure to find any algorithm or combination of algorithms that could do what "minds" do finally made several people within the psychological, cognitive, and computer sciences think maybe this wasn't going to work. So they changed their entire approach to computing and to algorithms. Instead of writing code that found solutions or answers in a specific way, they started looking at how biology, from brains to bees, found solutions. And they started creating artificial neural networks, evolutionary algorithms, gene expression programs, swarm intelligence programs, and so forth. And every few years, some new hardware or new code meant media announcements and press releases claiming that finally real "minds" or "self-awareness" were almost here. Only they never came.

Now there are reports coming out about neuromorphic chips, biocomputers, quantum computers, etc., all accompanied by the same announcements about what we will be able to do that were made 60+ years ago. But after over half a century of these claims, I'll wait until I see it.
 

feelgood

New Member
Where I'm going with this is: if you believe that the brain is creating the alternate reality, a computational reality, then can you control it? Could DMT be a way of creating an immersive virtual reality similar to the movie "The Matrix"?

Speculation: With training it is possible to have at least minor control over a psychedelic experience. You can compare this kind of control with lucid dreaming. There are methods that can help you question reality while you are dreaming and understand that you truly are asleep. At this point you can take control of the dream.

Fact: Dreams can be controlled. I base this on personal experience; I have had 20+ lucid dreams in the past two years. One downside to these, though: sometimes when I'm having a nightmare and I understand that I'm asleep, I try to force myself to wake up. Almost every time this causes a state called sleep paralysis, and it feels like dying.

Speculation: When under the effect of psychedelic drugs you can't wake yourself up from the nightmare, but you can guide the experience in another direction.
 

Dustin1234

dustin
Just wanted to add that I agree with the staff member that the original statement for discussion here is obviously about a drug and less about religion, but I think DMT is a good topic for other discussions because it becomes relevant when talking about meditation. For example, this happened to me:
Last October I heard about DMT and did some research. One night on vacation I realized it was the first time I had been fully rested in a long time (at the time I was getting around 33 hours of sleep a week). Being fully rested for the first time since I heard about DMT, I decided to test it out as a powerful molecule in the brain. All I did was lie in bed with my eyes shut and listen to the vibration of "aum". I never took anything; I just listened openly, without expectation. The experience was very intense and best matches the beginning stages of what people who have taken the drug talk about.
 

Dustin1234

dustin
The use of ayahuasca by a number of indigenous traditions of South America, some of which do include Christian imagery, is said to bring one closer to god and/or to understanding oneself. Many claim the experience is literally entering an alternative reality. I've never tried ayahuasca, but I am curious. Most people explaining their experience do so from a religious perspective. My question is: does the perspective of religion influence the experience of DMT? If you come from, say, a perspective in which consciousness is a product of computational intelligence, does that change what you experience? Where I'm going with this is: if you believe that the brain is creating the alternate reality, a computational reality, then can you control it? Could DMT be a way of creating an immersive virtual reality similar to the movie "The Matrix"? Under what doses can external influences actually impose suggestible imagery? Also, how are the effects on the brain under psychedelic influence different from dream states or meditation?

I found this because I wanted to see what people had already posted about meditation, so, this being a couple years old, I don't know if people still care or not, but this is what I've learned.

In my experience anything you believe will affect an outcome. POSSIBILITY + ENERGY = MANIFEST. Possibility is the same as belief.
DMT is a drug; it is also naturally excreted by the brain during deep sleep. There have now been legal studies on it done in the USA; these can be viewed on Netflix. According to the studies, people's experiences do vary based on their views. This however doesn't disprove that it's a real spiritual experience; even according to Christianity we don't all experience the same thing when we die. In the Mormon religion there are four tiers to heaven and four tiers to hell, and some religions add in the void or purgatory, so the experience doesn't have to be the same for all. Also, there's no proof, other than what people who take it say, that it is anything more than a trip.

As far as I know, meditation affects brain chemistry in the same way that a drug would; this is not to say that mystical experiences are all false. In what I believe, similar to Hinduism, there is a chakra system; each chakra relates to a level of the aura, and there are three planes of existence: physical, astral, and spiritual. Each layer of your aura has a way to interact with the other ones, so if you have a spiritual experience it's going to affect your brain chemistry, and if you're in your physical body when it happens you will experience it through a neuro-pathway somewhere in the body. Just because science can measure it and dismiss it as something we know doesn't mean that that's all there is to it; it could be a “” button to leave this plane and go to another, or it could just be a chemical experience.

As far as the Matrix goes, check out M3 Reality, or Metaphysics 3 Reality, based on the holographic model of the universe, where the actual laws of reality are flexible like in the Matrix. I tested this theory one day after reading and discussing enough about it that I felt I understood it. I got very good results. When I put a theory to the test I lay it all on the line: either it works or it's fake; I don't judge the experience, only absorb it. During that day I was able to alter so many things instantaneously: people's actions, time, the work schedule, what I was assigned to do. Everything happened the instant I thought it, the way I thought it, for eight hours straight; I even found $400 two seconds after wishing I had more money. The experience finally stopped when I gave a bum $100 and he was so ungrateful and rude that my energy levels dropped to the point that I ended up being sick for the next couple of days.

I've tested the Bible in this same way. There's a line in there that says if you pray every night then your prayers will be answered. So in high school I figured either all of my prayers would be answered after praying every night for a while, or the Bible was made up. After two months of praying every night it worked; for the next six months every prayer was answered. It stopped when I got tired of getting everything I asked for; I prayed that they would only be answered if I was doing the right thing in life, and I use this as a way to gauge my progress.

To anyone who may think I'm crazy: don't bother. I can back up all of my experiences with other people who have shared in them and know that they are true.
 

Katzpur

Not your average Mormon
This however doesn't disprove that it's a real spiritual experience; even according to Christianity we don't all experience the same thing when we die. In the Mormon religion there are four tiers to heaven and four tiers to hell, and some religions add in the void or purgatory, so the experience doesn't have to be the same for all.
Actually, Mormonism teaches of a three-tiered heaven (and the Bible specifically mentions the third heaven) and one place referred to as Outer Darkness, which could be equated with Hell (without the fire and brimstone).
 