
Are Computers Aware?

dust1n

Zindīq
Funny enough... I just got Minds, Brains & Science by Searle in the mail, but I had completely forgotten that the book I ordered was by the same author. It did not occur to me.
 

LegionOnomaMoi

Veteran Member
Premium Member
Cool graph showing most people say within 20 years AI is possible.
It doesn't say that. Notice that the graph starts at the 2010s. We're three years into that first set, and yet we are no closer than we were when that survey was taken, or the 2006 survey before that. Also note that it isn't just AI, but levels of AI. The Turing test is the easiest, followed by 3rd-grade intelligence. This is a bimodal distribution, which means (in this case) that most are either very optimistic or very pessimistic. So a bunch of people think that within 7 years we'll have progressed more than we have in the last 50.

That is a rather high number of people who say 100 years to never, but where does this stem from?

From endlessly repeated statements like "within 20 years AI is possible" made by top scientists year after year after year. And yet, with all the advancements, there is nothing we have accomplished that we couldn't have done with a computer from the '50s. It would just have taken a lot longer.

Also interesting to see experts say it's near but be wrong.
Which is what has repeatedly happened, and it caused later experts (or the same experts, later on) to realize how delusional the early optimism was.


When IBM or the like give an optimistic forecast, it is probably because they have something up their sleeves. They give good shows.
It's a great video, and it doesn't even do justice to the atomic and molecular (let alone subatomic) manipulations we're now capable of. However, when the people who are behind all the things that resemble AI, from Netflix recommendations to google ads, say that we're nowhere near anything like computers that understand anything, it's because they know exactly what's being done and most of the time they're the ones doing it.
 

idav

Being
Premium Member
It's a great video, and it doesn't even do justice to the atomic and molecular (let alone subatomic) manipulations we're now capable of. However, when the people who are behind all the things that resemble AI, from Netflix recommendations to google ads, say that we're nowhere near anything like computers that understand anything, it's because they know exactly what's being done and most of the time they're the ones doing it.

IBM is the one giving the optimistic dates, and they are a company showing remarkable results.

I'm sure you'll disagree with the projection but here you go.
IBM Projects It Will Have World’s Most Powerful Supercomputer in Two Years, Artificial Human Brain in 10 | TPM Idea Lab

But now, IBM is working with DARPA, the Defense Advanced Research Projects Agency, on a “cognitive computer” that would simulate the same number of neurons as the human brain, upwards of 100 billion, and would run on much less energy than WATSON. They expect to have the feat completed in 10 years time.
 

LegionOnomaMoi

Veteran Member
Premium Member
IBM is the one giving the optimistic dates, and they are a company showing remarkable results.

That's because they, like everyone else, redefined what AI meant after years of failure. What they call AI might be called "narrow AI" or "weak AI", and they know it:

"To apply the technology underlying WATSON to another domain, such as healthcare or call center support, would require not merely education of the AI system, but significant reprogramming and human scoring of relevant data -- the analogue of needing to perform brain surgery on a human each time they need to confront a new sort of task. As impressive as these and other AI systems are in their restricted roles, they all lack the basic cognitive capabilities and common sense of a typical five year old child, let alone a fully educated adult professional." (source)

How is an AI system that needs the equivalent of brain surgery to do any new task something that is so impressive compared to what we've been doing for decades?

That's being generous. IBM has a journal (IBM Journal of Research and Development), and when I searched for "artificial intelligence" do you know what came up as the most relevant paper in that journal? "Primary production scheduling at steelmaking industries", a study from 1996.

The more technical the audience, whether at some conference or in a journal for specialists, the less the hype. Research labs, including university research labs, advertise. That's how they get funding if they're a non-commercial research lab, and customers (money) if they're a commercial research lab (or a company with lots of labs).

Also, that "cognitive computer" thing? The University of Manchester started it, and DARPA gave IBM and affiliated universities $21 million in 2008. This isn't news.
 

idav

Being
Premium Member
As they keep increasing the number of processes that can be run at the same time, and many, including IBM, are still doing this, we get closer to the number of processes humans run simultaneously. These are certainly steps in the right direction. The issue is that processing speed doesn't address cognition. I think RAM is closer to what conscious awareness is. Random access memory is what is aware of the stored memory necessary for running live in the environment.
 

LegionOnomaMoi

Veteran Member
Premium Member
As they keep increasing the number of processes that can be run at the same time, and many, including IBM, are still doing this, we get closer to the number of processes humans run simultaneously.

We know this isn't true. To illustrate:

Let's say you had a calculator that was far better than any graphing calculator out there. It's so powerful it can take hundreds of trillions of data points (or coordinates in a mind-bogglingly enormous matrix) and run operations on these.

That would be great, but it just means we do a lot of calculations very fast.

Humans simply do not process information the way computers do. We know this. We know this because every single program you've ever run, every application, every webpage, every graphic, every anything you've ever observed a computer do is just using a tiny number of logical operations very fast.
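The claim that everything a computer does reduces to a handful of logical operations repeated very fast can be made concrete. Below is a minimal sketch (my own illustration; the function names are arbitrary) that builds NOT, AND, OR, and XOR out of a single NAND primitive and then adds two integers with them, which is essentially all an arithmetic unit does, just billions of times per second.

```python
def nand(a, b):
    # The single primitive everything else is built from.
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry):
    # One column of binary addition: sum bit and carry-out.
    s = xor(xor(a, b), carry)
    c = or_(and_(a, b), and_(carry, xor(a, b)))
    return s, c

def add(x, y, bits=16):
    # Add two integers one bit at a time, as a hardware adder would.
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(1234, 4321))  # 5555
```

Everything from spreadsheets to Netflix recommendations bottoms out in compositions like this; only the scale and speed differ.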

Processing information by itself is utterly meaningless. A cheap calculator processes information. IBM's WATSON can't process information at the level rats can.


These are certainly steps in the right direction.
There is no reason whatsoever to think so, and a great many reasons to think that it isn't.



The issue is that processing speed doesn't address cognition.
The issue is that procedures carried out mindlessly by a plant, a calculator, or a supercomputer are all doing exactly the same thing: mindlessly reacting.


Random access memory is what is aware of the stored memory necessary for running live in the environment.
RAM cannot possibly be aware of anything. It is a configuration of binary states designed in such a way that a processor can carry out manipulations on this configuration according to a specific algorithm.
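To illustrate that point, here is a toy sketch (my own, purely illustrative): a row of binary states stands in for RAM, and a fixed procedure rewrites them as a binary counter. The states change entirely by rule; nothing in the "memory" observes anything.

```python
memory = [0] * 8  # eight bits of "RAM", all zeroed

def step(mem):
    # The "processor": treat the bits as a binary counter and add one.
    carry = 1
    for i in range(len(mem) - 1, -1, -1):
        total = mem[i] + carry
        mem[i], carry = total % 2, total // 2

for _ in range(5):
    step(memory)

print(memory)  # [0, 0, 0, 0, 0, 1, 0, 1] -- binary for 5
```

Real RAM differs only in scale and in what algorithm the processor runs against it.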
 

idav

Being
Premium Member
RAM cannot possibly be aware of anything. It is a configuration of binary states designed in such a way that a processor can carry out manipulations on this configuration according to a specific algorithm.
It's aware of the environment and what's running in it. It isn't aware beyond being able to run the environment. So it is awake rather than asleep.
 

LegionOnomaMoi

Veteran Member
Premium Member
It's aware of the environment and what's running in it. It isn't aware beyond being able to run the environment. So it is awake rather than asleep.
RAM is the environment, and a procedure manipulates it. It's as "aware" of its environment as a typewriter. Given some set of procedures, the states change, whether in RAM or a typewriter or an abacus or a casino card shuffler.
 

idav

Being
Premium Member
RAM is the environment, and a procedure manipulates it. It's as "aware" of its environment as a typewriter. Given some set of procedures, the states change, whether in RAM or a typewriter or an abacus or a casino card shuffler.

Something has to do the temporary storing. The operating system uses the memory to be awake. A system is aware when it is awake, awaiting a command to respond to. I'm of the notion that, given enough hardware power, AI is programmable. I honestly couldn't fathom programming something that does a billion processes a second, but it sounds good in theory.
 

dust1n

Zindīq
Something has to do the temporary storing. The operating system uses the memory to be awake. A system is aware when it is awake, awaiting a command to respond to. I'm of the notion that, given enough hardware power, AI is programmable. I honestly couldn't fathom programming something that does a billion processes a second, but it sounds good in theory.

A system is aware when it is awake awaiting a command to respond to... what is it aware of?

It seems many people share your notion; but until someone actually realizes said notion, the notion itself isn't too valuable to others who do not share it.
 

apophenia

Well-Known Member
Something has to do the temporary storing. The operating system uses the memory to be awake. A system is aware when it is awake, awaiting a command to respond to. I'm of the notion that, given enough hardware power, AI is programmable. I honestly couldn't fathom programming something that does a billion processes a second, but it sounds good in theory.

A Core i7-3960X does 177,000 MIPS.

That's 177 billion instructions per second. (Well, according to a page I found in a quick Google search. I'm not sure whether the author actually means "instructions per second" or "instruction cycles per second", or whether the author even knows the difference. But even if there are 12 instruction cycles per instruction, like the original x86 series, that's still over 10 billion instructions/second.)
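The arithmetic can be checked directly (a sketch using only the 177,000 MIPS figure quoted above; the 12-cycles-per-instruction case is the pessimistic bound mentioned):

```python
mips = 177_000                            # quoted MIPS figure for the chip
instructions_per_sec = mips * 1_000_000   # MIPS = millions of instructions/s
pessimistic = instructions_per_sec // 12  # if every instruction took 12 cycles

print(instructions_per_sec)  # 177000000000  (177 billion)
print(pessimistic)           # 14750000000   (still over 10 billion)
```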

No basic statements of personal identity have been made by any laptops yet AFAIK.

"The operating system uses the memory to be awake."
Only according to your very loose definition of awake.

You don't seem to differentiate between an analogy to awareness/awakeness, and actual awareness.
 

Me Myself

Back to my username
How could we possibly know if they are? We could put commands in there so it simulates it; would that mean it is conscious?

Well, maybe. Why? Because how do you know if something is conscious anyway?

As a determinist, I see little difference, as far as proof of being aware goes, between a computer program saying it is and a human (who has been programmed by DNA and environment anyway) saying she is.

We can't know. Absolutely everything could be aware and we wouldn't know.
 

idav

Being
Premium Member
How could we possibly know if they are? We could put commands in there so it simulates it; would that mean it is conscious?

Well, maybe. Why? Because how do you know if something is conscious anyway?

As a determinist, I see little difference, as far as proof of being aware goes, between a computer program saying it is and a human (who has been programmed by DNA and environment anyway) saying she is.

We can't know. Absolutely everything could be aware and we wouldn't know.
Awareness isn't about the quality of what is being perceived, as long as something is coming through. Our awareness is the result of billions of micro-awarenesses.

Edit: You hit on an important question. We couldn't ever really know what is really being perceived even if technology allowed it.
 

LegionOnomaMoi

Veteran Member
Premium Member
Awareness isn't about the quality of what is being perceived, as long as something is coming through.

Actually, according to one of the very few formal models of awareness/consciousness, it is (more or less). The idea is that what matters isn't the information (something which was formalized by Shannon in 1948? I'll check later to see how close I was), but how it is integrated by the system. That is, you can flood information through certain systems and have the level of awareness be very low, or have much less information but increased integration and have a higher awareness.
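For what it's worth, the information measure was indeed formalized by Shannon (1948). A small sketch of what that formalization quantifies: entropy measures how much information a source carries, and says nothing about how a system integrates it, which is exactly the distinction the model above draws.

```python
from math import log2

def entropy(probs):
    # Shannon entropy (bits) of a discrete probability distribution.
    return -sum(p * log2(p) for p in probs if p > 0)

fair_coin = [0.5, 0.5]          # one bit of information per toss
random_byte = [1 / 256] * 256   # eight bits per symbol

print(entropy(fair_coin))    # 1.0
print(entropy(random_byte))  # 8.0
```

A wire can carry either amount with no awareness at all; the model's extra ingredient is a measure of integration, not of quantity.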

The problem with this is that it tells us a camera is aware. In fact, we can compare different cameras to each other and see how much more aware one is relative to another. To me, this means getting a formal definition at the expense of anything useful. It's a formal definition motivated by the desire for a formal definition, not by whether or not the definition tells us anything or helps us advance in any way whatsoever.

Our awareness is the result of billions of micro-awarenesses.
It isn't.

We couldn't ever really know what is really being perceived even if technology allowed it.

Have you ever squeezed the handle of a pump at a gas station? Would you say that the pump is aware of being squeezed and it is this awareness which causes the gasoline to flow into your car?

Let's try something. For the sake of argument (and it's basically true, depending on definitions), we'll call anything and everything that can react to some input (whether pushing on a bike pedal or a car's gas pedal) and produce some output a machine (with the bike, we get a chain on a gear with spokes to turn the wheels, while with the car we have lots of small explosions that ultimately turn the wheels). So humans, rats, lawyers, and computers are all likewise machines.

With a human or a lawyer, someone hands me a piece of paper with a list of names to call and a rather general description of what I should try to get out of them during the conversation. The paper has instructions, but they are very vague and informal. That's because humans (and lawyers) don't need formalized instructions, as they aren't merely procedural machines. They are machines that understand concepts.

Another machine is a typewriter. It's built so that someone can interact with it: by, e.g., putting in a new sheet of paper and rolling the wheel so that the paper is in a position to be hit by the various letter-stamps activated by keystrokes.

In other words, if I hit a key on the typewriter, it will mechanically carry out a procedure whereby that key corresponds to some character that is implemented in the system as a stamp which, by physically striking a sheet of paper, will leave a mark nearly identical to the character (letter) I wanted.
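The typewriter's entire "behavior" can be sketched as a fixed lookup plus a strike (a toy of my own, not a model of any real mechanism): the mapping from key to mark leaves no room anywhere for the machine to understand the text it produces.

```python
stamps = {"a": "a", "b": "b", "!": "!"}  # each key is wired to one stamp
paper = []

def keystroke(key):
    # The machine's entire "behavior": look up the stamp, strike the paper.
    paper.append(stamps[key])

for key in "ab!":
    keystroke(key)

print("".join(paper))  # ab!
```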

Most machines do this and only this. They can be incredibly complex and adaptive, whether an ant colony or a Venus flytrap, but all they can do is mechanically/reactively carry out procedures. A Venus flytrap isn't "aware" that a bug has landed in its jaws. You can test this by poking it with something else, and it will react automatically as if it had captured a bug.


A computer is only this: a machine that can carry out procedures. It is quite literally a scaled-up calculator.
 

idav

Being
Premium Member
Actually, according to one of the very few formal models of awareness/consciousness, it is (more or less). The idea is that what matters isn't the information (something which was formalized by Shannon in 1948? I'll check later to see how close I was), but how it is integrated by the system. That is, you can flood information through certain systems and have the level of awareness be very low, or have much less information but increased integration and have a higher awareness.

The problem with this is that it tells us a camera is aware. In fact, we can compare different cameras to each other and see how much more aware one is relative to another. To me, this means getting a formal definition at the expense of anything useful. It's a formal definition motivated by the desire for a formal definition, not by whether or not the definition tells us anything or helps us advance in any way whatsoever.


It isn't.



Have you ever squeezed the handle of a pump at a gas station? Would you say that the pump is aware of being squeezed and it is this awareness which causes the gasoline to flow into your car?

Let's try something. For the sake of argument (and it's basically true, depending on definitions), we'll call anything and everything that can react to some input (whether pushing on a bike pedal or a car's gas pedal) and produce some output a machine (with the bike, we get a chain on a gear with spokes to turn the wheels, while with the car we have lots of small explosions that ultimately turn the wheels). So humans, rats, lawyers, and computers are all likewise machines.

With a human or a lawyer, someone hands me a piece of paper with a list of names to call and a rather general description of what I should try to get out of them during the conversation. The paper has instructions, but they are very vague and informal. That's because humans (and lawyers) don't need formalized instructions, as they aren't merely procedural machines. They are machines that understand concepts.

Another machine is a typewriter. It's built so that someone can interact with it: by, e.g., putting in a new sheet of paper and rolling the wheel so that the paper is in a position to be hit by the various letter-stamps activated by keystrokes.

In other words, if I hit a key on the typewriter, it will mechanically carry out a procedure whereby that key corresponds to some character that is implemented in the system as a stamp which, by physically striking a sheet of paper, will leave a mark nearly identical to the character (letter) I wanted.

Most machines do this and only this. They can be incredibly complex and adaptive, whether an ant colony or a Venus flytrap, but all they can do is mechanically/reactively carry out procedures. A Venus flytrap isn't "aware" that a bug has landed in its jaws. You can test this by poking it with something else, and it will react automatically as if it had captured a bug.


A computer is only this: a machine that can carry out procedures. It is quite literally a scaled-up calculator.

The cameras don't have an operating system; their operations are more reactionary. However, given that chemistry is also essentially reactionary, cause and effect are part of it. So if A hits B, then B knows it in the sense that it reacted. When this reaction is encoded and can be built upon with learning, you have a system that is aware of whatever it is designed to be aware of.

When looking at an eye, the eye is not aware, but it is more so than the camera, since the cells in the eye are aware: each cell is an individual system capable of learning and communication.
 

michel

Administrator Emeritus
Staff member
Here is a quick definition from wiki.

Computers are able to receive input from their environment through keyboards, scanners, cameras, and microphones. The computer is also able to store this data and even send information back in return based on its own resources. Is this enough to satisfy any part of the definition of awareness? Why or why not?

It also seems that consciousness can be a possibility when the computer is actually turned on. It is storing memory into RAM, accessing its memory in real time; however, this memory gets wiped once the computer shuts down. When the computer comes back up, it puts all the memory it can into the RAM, which is what the computer is "aware" of in order to run current processes.

The other thing to consider is what it actually takes to feel something. Is perceiving enough to feel something?

Mine is - and it is determined to confuse me...
 

dust1n

Zindīq
After looking again at the wiki entry for awareness, and reading more fully:

Awareness is the state or ability to perceive, to feel, or to be conscious of events, objects, or sensory patterns. In this level of consciousness, sense data can be confirmed by an observer without necessarily implying understanding.

Man, that second sentence makes all the difference. If this is so, then computers are aware, but basically only of the input.

But that awareness does not equate to AI, or to having a brain-like understanding in any way. It's pretty much meaningless to be aware.
 

idav

Being
Premium Member
After looking again at the wiki entry for awareness, and reading more fully:

Awareness is the state or ability to perceive, to feel, or to be conscious of events, objects, or sensory patterns. In this level of consciousness, sense data can be confirmed by an observer without necessarily implying understanding.

Man, that second sentence makes all the difference. If this is so, then computers are aware, but basically only of the input.

But that awareness does not equate to AI, or to having a brain-like understanding in any way. It's pretty much meaningless to be aware.

The implication is that AI would, in theory, be a conscious being; that is, if we were able to pull it off. Say we were able to pull off AI, and the machine talks to us and says it feels and perceives. Are we to believe that the ones and zeroes equate to perceiving something, feeling something? I think so, but each type of "perceiving" is different depending on which sense is being utilized. Obviously machine senses would be a different experience, a different way to perceive the world.
 