Do We Create God?

EgoTripper

New Member
I was here awhile back, left for a bit to focus on law school, then forgot the name and email address of my last account! Ah well. Hello again, atheist-raised-by-Catholics here :)

This is a fairly long-shot brainstorm I had recently. I thought I'd share!

I just watched Waking Life, a trippy (for lack of a better word) movie about a guy wandering through his dreams and having various conversations with people about obscure philosophies and profound questions. I can't recommend it enough for philosophy-minded people.

One of the people he meets is a chemist who has an awesome take on evolution. He points out that we have taken our evolution into our own hands, and as a result, our evolution (defining "our evolution" as not only our biological development, but also our mental and technological development) has accelerated exponentially. Where once natural selection and social adaptation drove human development, now our own intelligence drives our own development.

He suggests that we're only a few short decades away from learning how to enhance our own intelligence, either through genetic research or through artificial intelligence (or some combination of the two). When that happens, he seems to imply that all Hell will break loose almost instantaneously.

Imagine: A scientist makes himself smarter, and thus better able to make himself even SMARTER, and thus EVEN BETTER able to further increase his intelligence, and so on and so on. It's like a "feedback loop" of snowballing intelligence. As the process begins, the speed of compounding intelligence also accelerates into a runaway, explosive chain reaction.

For example: Maybe it begins as a brain pill. The scientist takes it every day for a month. He's smarter during that month, and with that intelligence he invents a better brain pill. This pill makes him even smarter during the next month, and so is able to create an even better pill. This goes on for maybe a year until, say, he figures out how to upload his mind into a computer. Suddenly, the intelligence amplification is happening in computer time -- millionths of a second. Every iteration sees an increase in intelligence that drives the next iteration, and each lasts mere fractions of a second instead of a month. To us outside, we would see an "almost instantaneous realization of human potential;" the capacity of that recursively-improving man/machine would go infinite almost immediately unless something interrupted the cycle or got in its way -- and given that this being's intelligence is improving exponentially, couldn't that intelligence be applied against any such obstacles in its path?
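This compounding loop is easy to sketch numerically. Below is a toy Python simulation (every parameter is invented purely for illustration, not a forecast): each cycle multiplies intelligence by a fixed gain and, because the smarter agent works faster, cuts the time until the next cycle in half, so a huge number of improvement cycles fits inside a bounded stretch of calendar time.

```python
def intelligence_explosion(iq=1.0, gain=0.10, step_days=30.0,
                           speedup=2.0, max_cycles=60):
    """Toy model of recursive self-improvement.

    Each cycle raises intelligence by `gain`; because a smarter agent
    iterates faster, the next cycle's duration is divided by `speedup`.
    All numbers are invented for illustration only.
    """
    elapsed = 0.0
    for _ in range(max_cycles):
        elapsed += step_days
        iq *= 1.0 + gain
        step_days /= speedup  # the next improvement arrives sooner
    return iq, elapsed

iq, elapsed = intelligence_explosion()
# Sixty compounding cycles, yet total elapsed time never exceeds 60 days:
# the cycle times form a geometric series (30 + 15 + 7.5 + ...) that sums
# to 60, while intelligence has grown by a factor of 1.1^60, roughly 304x.
print(round(iq, 1), elapsed)
```

The point of the sketch is the shape of the curve, not the numbers: as long as each gain shortens the next cycle, the "month per iteration" of the pill phase collapses toward the fractions-of-a-second iterations of the uploaded phase.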

The chemist in Waking Life stopped there. In the process of collecting the brains that had dribbled out of my ears, though, I kept thinking.

As far as I see, there aren't many assumptions that need to be made to conclude that this is a fairly inevitable course of events, and that it will likely happen soon:

Assumption 1) Someone will, at some point in our future, invent a means to increase human intelligence. We've already got drugs that are on the right track, like medication for ADHD. There have been developments in AI that might also lead us to this "breaking point" of the first technological increase in human intelligence.

Assumption 2) The unstoppable nature of the cycle once it's begun. Once our intelligence learns how to improve itself, then it stands to reason that the next step of improved intelligence will be in an even better position to improve itself, and so on and so on. As I said above, this improved intelligence would be used against any obstacles that might stand in its path.

So what are the consequences? If intelligence truly "goes infinite," then virtually every secret in the known Universe would be unlocked. Time and space, constructs we know to be manipulable in theory, would become lego blocks to this "neo-human." The only thing it wouldn't want to mess with would be its own genesis (i.e. if it wanted to play around with time, it probably wouldn't want to kill the father of the man who invented that first brain pill).

So. Omnipotent, omniscient and possessing a vested interest in ensuring that the events leading up to its own creation go according to plan. Sound familiar? Could we actually be on the verge of creating our own God?
 

evearael

Well-Known Member
Well, it takes more than just intelligence to survive... an ever-increasing IQ does not guarantee survival. Also, there are more concerns in life than just increasing your intelligence. If 100% of our brain power were devoted to increasing our brain power, I'd say we'd have lost a big part of what made us human to begin with. Furthermore, just because our intellect is rapidly increasing doesn't necessarily mean that it can outweigh the power of our instincts and subconscious in decision making.
 

Sunstone

De Diablo Del Fora
Premium Member
I think humans have several kinds of intelligence. Perhaps as many as eight or nine different kinds of intelligence. These are developed unevenly in each of us. One person might excel at musical intelligence, but stumble at mathematical intelligence, and so forth. None of us are bright in all the ways it is possible for humans to be bright. So, whenever someone talks about enhancing "human intelligence", I wonder just which intelligences they propose to enhance.
 

EgoTripper

New Member
evearael, Sunstone:

I think both of your points can be answered by better defining "intelligence." I would define the "type" of intelligence I'm talking about as the quantified difference between "the average joe" and a brilliant neurobiologist or neurotechnologist. Whatever that quality is, this process would be initiated the moment we identify it and discover how to enhance it. The only requirement is recursion; namely, when we can directly enhance our ability to directly enhance our ability to directly enhance our ability to... etc, etc. You can see how the explosive upswing would proceed.

Besides, after a few cycles have been completed, it's perfectly reasonable to assume that this super-intelligent being would have the capacity to branch out, becoming better able to identify and modify all other components of the human brain: Instinct, creativity, etc. This may all happen in a few seconds of normal time if the brain has wired itself into a computer and entered the millisecond-realm of computer time. After all, the brain is a system. As a system, it can -- and will -- be better understood over time. This "critical point" (I've heard it referred to as a Singularity) occurs when we understand the brain sufficiently to enhance its ability to understand itself. It's also reasonable to assume this enhanced intelligence would be more than capable of tackling any obstacles in its path, including those inherent to the human condition, such as survival.

Regarding some of evearael's other points: this process does not guarantee that we will remain human; quite the opposite. Many people are terrified of this event, especially if it comes about as a result of AI and the creation of something specifically non-human. In fact, "The Matrix" and "Terminator" are often described as doomsday scenarios of the Singularity event, though they're illogical in the sense that these "greater intelligences" are themselves incapable of creating intelligence. I'm a little more optimistic about it myself :) I think that the human qualities (truth, justice, etc.) that those sinister machines dispensed with so quickly have intrinsic, logical value that a super-intelligent being would recognize and feel bound by out of its own self-interest. Counterintuitively, a being of infinite intelligence may also be infinitely predictable.

This is in line with the qualities most religions ascribe to their Gods, too.

Thanks for the welcome! :D
 

evearael

Well-Known Member
Using intelligence to survive is all fine and dandy, if the person has the time and resources to bring their ideas to light. Also, under your proposed system, what guarantees exist that the enhancements would be available universally? Humans have an innate will to power that manifests in all human interactions, if you follow Foucault's reasoning, and the power that would be unleashed by such a dramatic expansion of human intelligence would be quite staggering.

Furthermore, even supposing that altruism, if you believe in that sort of thing, manifests in the creation, production and distribution of such an enhancement, the resources won't necessarily be available to convert all of humanity, and there may be an entire portion of society who chooses to remain without augmentation. Also, consider whether a person in a war-torn developing country wants food, shelter and peace first, or an intellectual boost. If you stop to solve the issues of the developing world first, then you only perpetuate the gap between the emerging classes. Thus, just about any way you look at it, a chasm between the haves and have-nots will widen as exponentially as the modifications arise.
 

Mister Emu

Emu Extraordinaire
Staff member
Premium Member
Hmmm... if it could happen in the future, what says it didn't happen before?
 

Sunstone

De Diablo Del Fora
Premium Member
Actually, I'd be more excited about a technology that increased the human capacity to treat others with decency than I would about a technology that increased some (other) aspect of human intelligence. After all, "intelligence" by itself seems just as likely to invent a more efficient gas chamber as it is to invent a cure for cancer.
 

EgoTripper

New Member
evearael, Sunstone:

I think evearael's response can be divided into two main components. The first questions whether or not this event would be a good thing, and Sunstone's post is in the same vein. I'm not attempting to justify this as moral or immoral, as good or bad. I'm suggesting that it is inevitable, whether we like it or not, and will begin the moment the first human being makes a technological leap that allows him to fractionally enhance his own ability to enhance himself. Once this human "goes infinite," he or she can no longer properly be called "human." From the perspective of the rest of humanity, the entity will be one of three kinds: malevolent, indifferent or benevolent.

A malevolent entity may destroy humanity for some inscrutable reason, maybe for the resources it needs to manifest its goals (i.e. the machines in the Matrix).

An indifferent entity may depart Earth entirely or otherwise take a 'hands off' approach, much as we might broaden our stride to avoid an anthill. The rest of humanity may not even know this momentous event happened (the ex-human might simply go missing). This may be what you mean when you discuss the widening gulf between the "haves" and the "have-nots."

A benevolent entity may bring the rest of humanity with it, or do something else that is in humanity's interests.

I lean towards benevolence only because I believe that, all things being equal, the vast majority of humanity would choose benevolence. Imagine: You're suddenly given infinite power. What are you going to do, destroy the world "just because" or snap your fingers and do what's best for humanity, given that it takes virtually no effort to do so?

Your second point describes a number of obstacles that might hold this being back, like scarcity of resources and the like. But think about what the Singularity entails; the event would be almost instantaneous from our perspective.

That's the point you may be overlooking. There would be no time for disaster, or strife, or for people to become have-nots. This entire event, this upswing in growth, would happen almost instantaneously. On an evolutionary timescale, we're already in the middle of this insane, explosive upswing. By looking back, in the words of the Waking Life chemist, "maybe then you'll see the powerful telescoping effect of the evolutionary paradigm." It took millions of years for us to figure out tools and fire. It took another few hundred thousand to use those tools to build a simple structure. The development of language and expression is measured in tens of thousands of years. The development of cities is measured in thousands. The Renaissance and the Industrial Revolution were mere centuries apart. Then consider the last century: it took just over fifty years to move from jumping off hills in wooden wings to landing on the moon.

Really think about the speed of our development. A caveman might feel more at home in 1600 AD than someone from 1900 AD would today. The time between advances is rapidly narrowing; some people smarter than I am are actually getting scared. The event I'm talking about, which many scientists believe will happen in our lifetimes, is the moment when this gap shrinks to hours, then seconds, then milliseconds.

So, what about the obstacles you mentioned? Envision a being who has passed through at least the first few cycles of this process, before any of the obstacles you've raised present themselves. This scientist began with pills, then spent some grant money and used his or her higher intelligence to invent some cybernetic implants. Already departing from our definition of "human," this being has become so intelligent that the very act of your envisioning it is analogous to an ant envisioning you. It has married man and machine in the secret lab of some well-financed government initiative. Then, one day, imagine that it uses its inconceivable (yet still finite) intelligence to upload itself into a quantum computer that moves a billion times faster than a biological brain... and experiences a heartbeat's worth of progress.

To understand what would happen in that heartbeat, look at what we have accomplished in just 200 years. We've come from bloodletting and ether to nuclear power, neuroscience and mapping the human genome. We've figured out some of the most arcane secrets of time and space in a single century. Now do the arithmetic: at a billion-fold speedup, that heartbeat of real time gives the being roughly thirty simulated years of progress in computer time. In that time, it's further refining itself, expanding its own capabilities every infinitesimal fraction of a second.

Then give it another thirty, and another, and another, and another. A few seconds have gone by, and this is assuming that in a century and a half of simulated development within a computer, it's never figured out how to move faster. In reality, the growth curve may go vertical, asymptotic in finite time; if that's the truth, then the intelligence of this entity really would "go infinite."

Sure, obstacles would present themselves. I'm sure "going infinite" requires momentous resources and energy. But this "neo-human" would be applying its inconceivable brilliance to those problems, too. Cracking practical fusion power alone should give this entity all the energy it needs, at least for the first few iterations. Eventually, it would figure out the secrets of quantum energy fluctuations. Maybe it divides time by zero and generates a localized black hole. Or maybe it just eats the Sun :) Soon, the entity would bend time and space in ways we haven't even begun to imagine. And not only could this being harness any energy source with perfect precision, it could also increase the efficiency of any system or procedure so that less energy was needed in the first place. To a being approaching infinite intelligence, there would be no such thing as scarcity of resources. Hell, it could probably spark a contained Big Bang and mine the splinter Universe for whatever resources it needed. When the line between man and God blurs, it's difficult to say what's not possible.

As Mister Emu said, maybe this has already happened. Maybe we're living there, and maybe we're closer to God than we know.

Maybe this is the master plan, what life has been leading to, the ultimate conclusion of life on Earth.

Wow. I really love this idea, even for its potentially frightening consequences.
 

evearael

Well-Known Member
God is perfect, man is not, and in that way the line will never be blurred. As far as the benevolent, indifferent, malevolent side of your argument goes... to be completely any of those qualities is the equivalent of taking away our free will, thus making us further from God than we are now.
 

EgoTripper

New Member
evearael said:
God is perfect, man is not and in that way the line will never be blurred.

You can't know that. A being of infinite intelligence would be perfect by definition. We would agree if your only point was that this being would no longer be called "man," but I think you're going further. I think you're saying that man could never attain infinite intelligence, and thus the path to such intelligence that I've described must be flawed in some way. What is the flaw?

evearael said:
As far as the benevolent, indifferent, malevolent side of your argument goes... to be completely any of those qualities is the equivalent of taking away our free will, thus making us further from God than we are now

Isn't God completely benevolent?
 

evearael

Well-Known Member
Consider the conservation of energy... it is neither created nor destroyed, only transmuted between forms. Likewise, this ever-increasing intelligence won't come out of thin air. You refer to pills (I'm a licensed pharmacy technician, so I've seen the abuses, victories and shortcomings of pills), so let's run with that. First off, if you are changing how the brain functions, there is a very high certainty that the modification would affect more than just raw intellect. Pills are not terribly specific. They run through the entire body just to help one part of it. That is one reason why there are so many side effects to medication. There will always be a cost... thus the reference to conservation. Whether or not it is worth it is another question. Next, you have to consider more than the variety of intellectual expression, and look at how much of it can be effectively harnessed. The net increase in intellect would not be applied in full to the task at hand. It would be divided amongst all the regular demands of life, from marital happiness to eating lunch... that is unavoidable. Thus, the progression would be much slower, if it got started at all, and the cost would increase with each progression until the natural breaking point: where the benefits from an increase are less than the costs it would incur.
 

EgoTripper

New Member
First things first: I'm not interested in discussing the moral or ethical issues surrounding this. It's not that I don't think them important, but that I know they're way beyond the scope of a single thread and I don't want to be sidetracked. Feel free to make any points you wish, but I'll filter for relevant content.

evearael said:
the cost would increase with each progression until the natural breaking point: where the benefits from increase are less than the costs it would incur.

I agree with you about the potential of this obstacle, though I don't think it's inevitable. The key to understanding the reasonable likelihood of "going infinite" is the truly recursive nature of this process. The "breaking point" you describe occurs when the equation governing the process balances (i.e. benefit gained = increase in cost). With this process, however, the benefits are constantly redefining the equation, constantly reducing the costs.

Here's another way of thinking about it: Any "breaking point" could simply be defined as an obstacle standing in the path of our superbeing. Yet with every iteration, that being is more and more able to come up with ways around any given obstacles (just look at the history of our race for evidence of that, as this is merely an extension of that process).
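One way to make the disagreement concrete is a toy cost/benefit model (all numbers invented for illustration): evearael's breaking point appears when each cycle's cost compounds unchecked, and disappears the moment each cycle's intelligence gain also trims the next cycle's cost.

```python
# Toy model of the "breaking point" debate; every number is invented.
def cycles_before_breaking_point(benefit=10.0, cost=1.0,
                                 cost_growth=1.5, cost_cut=1.0,
                                 max_cycles=100):
    """Run improvement cycles until cost exceeds benefit.

    `cost_cut` < 1.0 models each new intelligence level re-engineering
    the process to be cheaper; 1.0 means costs only ever compound.
    Returns the number of cycles completed (capped at `max_cycles`).
    """
    n = 0
    while cost <= benefit and n < max_cycles:
        cost *= cost_growth * cost_cut   # next cycle's price tag
        n += 1
    return n

print(cycles_before_breaking_point())              # → 6 (costs outrun the benefit)
print(cycles_before_breaking_point(cost_cut=0.5))  # → 100 (hits the cap; no breaking point)
```

With static costs, the process stalls after a handful of cycles, exactly the breaking point evearael describes; once each cycle cuts the next cycle's cost faster than it grows, the loop never balances and runs until something external caps it.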

evearael said:
Consider the conservation of energy... it is neither created nor destroyed, only transmuted between forms.

Intelligence is not a form of energy, and conservation laws don't apply to everything. I would relate intelligence to a programming algorithm before I would equate it to energy. Refining a programming algorithm (like a search program or graphing function) can result in an increase of efficiency and output that FAR exceeds the energy/time/matter input.
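The algorithm analogy can be pinned down with the classic textbook example: replacing a linear scan with binary search over a sorted million-element list cuts the worst case from a million probes to twenty, a payoff wildly out of proportion to the effort of the rewrite, and nothing like a conserved quantity being shuffled around.

```python
from math import log2

# Worst-case probes to find one item among n (standard textbook figures):
def linear_search_probes(n):
    return n                    # unsorted list: may check every element

def binary_search_probes(n):
    return int(log2(n)) + 1     # sorted list: halve the range each probe

n = 1_000_000
print(linear_search_probes(n))  # → 1000000
print(binary_search_probes(n))  # → 20
```

Same question, same answer, fifty-thousand-fold fewer steps: the gain comes from restructuring the procedure, not from pouring in proportionally more energy.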
 

EgoTripper

New Member
In short:

1) Theory proposes inevitable runaway chain-reaction whereby human intelligence "goes infinite" in the not-too-distant future.

2) Possessing infinite intelligence, resulting entity also possesses infinite capability and can transcend time and space.

3) Prediction: The entity has the power to transcend space and time, and is dependent on our development for its creation down the road. With infinite intelligence, it is likely capable of manipulating events to bring about its genesis earlier, or with greater efficiency, "improving" itself by directly editing the events that led to its creation.

With the three propositions above, we can describe characteristics of this entity:

1) Omniscience: It has infinite* intelligence.
2) Omnipotence: Infinite intelligence leads to infinite capability.
3) Omnipresence: Infinite capability leads to transcending time and space.
4) Master Plan: It has the motive and the means to take a direct interest in the development of humanity, given that its creation occurs at the apex of that development.

Concludes with a smugly open-ended question: "Sound familiar?"

The more I work with the above, the more elegant it appears in terms of addressing many of the concerns facing religions. Why is there evil? Why doesn't God come down and meet us? Why does God allow multiple religions to describe him/her/them differently? Why is God so interested in Earth, out of all the vast cosmos? I'm not saying these questions are fatal to religion, but they do have much easier answers under this ... well, I wouldn't call it a theory... maybe "an intriguing what if?" :)

* It should be stated that "infinite" doesn't have to mean "limitless," but merely "so large as to be beyond our comprehension." From an ant's perspective, we have infinite intelligence. Consider whether anything about God truly requires limitless capabilities, or merely capabilities as far beyond human comprehension as ours are beyond those of bacteria.
 