
Evil Scientists Create a Computer Intelligence with Schizophrenia.

Druidus

Keeper of the Grove
This reeks of carbon-based chauvinism.

It is an artificial intelligence just as much as Watson is an artificial intelligence, except this AI is coded with emotions, is a neural network, has the ability to learn, answers questions (like Watson), has an episodic memory, and can make associative connections. And they inflicted it with schizophrenia.

If you are going to argue that DISCERN is not AI then you are going to have to argue that Watson is not an AI.

Look, just because something is an AI does not make it sentient. Once you understand this fact, you'll be on the road towards understanding why you are wrong about this matter.

Here's an excerpt from something relevant to the discussion:


Getting back to the discussion of AI, Kurzweil has complained that many people fail to acknowledge that various types of pre-intelligent information technology in fact represent working versions of artificial intelligence. Frankly, I believe that the public’s unwillingness to characterize extant technologies as manifestations of artificial intelligence is a good thing. In any circumstance where we lower the bar of our expectations, the reality that we create tends to rise to the level of our expectations. Thus, if we begin referring to existing not-so-smart technologies as artificial intelligence, then progress towards an actual form of AI (i.e., technologies that are capable of passing the Turing test) will be derailed. In too many cases, half-baked smart technologies — such as the current generation of voice recognition software — have created more problems than they have solved: real human intelligence remains infinitely preferable to dumbed-down versions of AI.

That said, we need to know what intelligence is, and respect it in its broadest scope and potential, before we can hope to construct an artificial version that approximates human intelligence in a meaningful way. My feeling is that, if we are determined to create artificial intelligence, then we should do precisely that and nothing less. It is certainly possible to create information technologies, such as Watson, that masquerade as AI, but if we treat such chimeras as AI, then what have we really accomplished? AI will not exist until knowledge-seekers manage to resolve the Turing problematic. Technologies that fall short of the Turing threshold, while interesting and valuable in many ways, simply do not merit the honor of being called AI.

Intelligence is the most valuable resource that humans possess and it is a disservice to cheapen the concept in any way. If researchers are ever going to create a version of AI that is more than a mockery of human intelligence, then they will have to begin by grasping not merely the mechanics of intelligence, but its aesthetics. Intelligence is a sublime experience that is more than the sum of its parts. No machine that fails to grasp that essential fact will ever be able to fool a human interlocutor, nor should anyone presume to describe such a deficient mechanism as intelligent.


Here's the source:

Artificial Intelligence (AI): Is Watson the real thing?



Sorry, this just isn't sentient, and we're far from achieving such an incredible accomplishment at the moment, no matter the hype you may have seen, or the way AIs are depicted in movies, TV, and books.
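
Since that excerpt leans so heavily on the Turing test, here's a bare-bones sketch of what the test actually involves, just to make the idea concrete. The canned bot replies and the single judge are simplifications of mine, not anyone's real evaluation setup:

[code]
import random

def machine_reply(prompt: str) -> str:
    # A stand-in "chatbot": canned, keyword-triggered responses.
    # Anything this shallow is exactly the kind of system that fails the test.
    if "weather" in prompt.lower():
        return "I do not have access to weather data."
    return "That is an interesting question."

def human_reply(prompt: str) -> str:
    # In a real test this is a live person typing; here it is faked
    # through the console only so the script runs end to end.
    return input(f"(human) {prompt}\n> ")

def imitation_game(judge_questions, rounds=5):
    """One blind session: the judge questions two unlabeled parties,
    then guesses which one was the machine."""
    parties = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        # Randomly swap the labels so the judge cannot rely on ordering.
        parties = {"A": human_reply, "B": machine_reply}

    transcript = []
    for question in judge_questions[:rounds]:
        for label, responder in parties.items():
            transcript.append((label, question, responder(question)))

    for label, question, answer in transcript:
        print(f"[{label}] Q: {question}\n    A: {answer}")

    guess = input("Judge: which party is the machine, A or B? ").strip().upper()
    truth = "A" if parties["A"] is machine_reply else "B"
    print("Correct guess." if guess == truth else "Fooled: the machine passed this round.")

if __name__ == "__main__":
    imitation_game(["How is the weather where you are?",
                    "What did you dream about last night?"])
[/code]

A machine only deserves to be called a pass if judges can't do better than chance over many sessions like this, which is a far higher bar than winning at Jeopardy.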

Anyway, I wish you well in your journey towards a greater understanding of the science of artificial intelligence and the related computer technology.
 

CynthiaCypher

Well-Known Member
Druidus said:
Look, just because something is an AI does not make it sentient. Once you understand this fact, you'll be on the road towards understanding why you are wrong about this matter. [...]

Here's the source: Artificial Intelligence (AI): Is Watson the real thing? Watson is discussed in there as well.

Sorry, this just isn't sentient, and we're far from achieving such an incredible accomplishment at the moment, no matter the hype you may have seen, or the way AIs are depicted in movies, TV, and books.

What about the emotional coding thing and the fact it is a neural network? That's what has me worried. Not worried that it is going to go all Skynet on us, but worried that it might feel. No one wants to see something with emotions tortured.
 

technomage

Finding my own way
Druidus said:
Look, just because something is an AI does not make it sentient. [...] Sorry, this just isn't sentient, and we're far from achieving such an incredible accomplishment at the moment.

Druidus brings up some very good points, and while I do have my disagreements with McGettigan's arguments, he's absolutely correct on one thing--Watson would fail the Turing test 100 times out of 100.

Watson is a computer programmed to accomplish a specific task: answer Jeopardy-style questions. It cannot extrapolate that knowledge into ANY other area of life. Watson cannot contemplate its own existence ... indeed, Watson is not aware of its own existence.

Currently, we can make very realistic "baby simulators"--they're used in high school "Life Skills" classes to teach teenagers what caring for a baby is like. They are programmed to cry for different needs at random times, they record whether or not they are too hot or too cold, they record whether or not they have been shaken or abused. Are these "baby simulators" actual babies? Of course not. They are tools, programmed for a specific purpose, and they will never grow to be adults.

It's similar to computers like Watson or DISCERN. These computers are not self-aware--they are programmed to perform specific tasks, and they cannot apply their programming to other tasks outside of what they have been programmed for.
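
If it helps to see how un-mysterious this is, here's roughly the kind of logic one of those baby simulators could be running. This is my own toy sketch, not any real product's code, but it makes the point that "crying for a need" and "recording abuse" are just scheduled events and log entries:

[code]
import random
import time

class BabySimulator:
    """Toy model of a 'Life Skills' baby doll: random cries, sensor logging."""

    NEEDS = ["feed", "burp", "diaper", "rock"]

    def __init__(self):
        self.current_need = None  # what the doll is currently "crying" about
        self.event_log = []       # everything is recorded for the teacher's report

    def maybe_start_crying(self):
        # At random moments, pick a need and start "crying".
        if self.current_need is None and random.random() < 0.3:
            self.current_need = random.choice(self.NEEDS)
            self.log(f"crying: needs {self.current_need}")

    def respond(self, action: str):
        # The student cares for the doll; only the matching action stops the cry.
        if action == self.current_need:
            self.log(f"soothed by correct response: {action}")
            self.current_need = None
        else:
            self.log(f"wrong response: {action}")

    def record_environment(self, temperature_c: float, shaken: bool):
        # Sensors just write entries to the log; nothing is ever "felt".
        if temperature_c < 16 or temperature_c > 30:
            self.log(f"temperature out of range: {temperature_c} C")
        if shaken:
            self.log("shaken: flagged as mishandling")

    def log(self, message: str):
        self.event_log.append((time.time(), message))

if __name__ == "__main__":
    random.seed(1)  # deterministic demo run
    doll = BabySimulator()
    for _ in range(10):  # a few "minutes" of class time
        doll.maybe_start_crying()
    doll.record_environment(temperature_c=14.0, shaken=True)
    doll.respond("feed")
    for _stamp, message in doll.event_log:
        print(message)
[/code]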
 

technomage

Finding my own way
CynthiaCypher said:
What about the emotional coding thing and the fact it is a neural network? That's what has me worried. Not worried that it is going to go all Skynet on us, but worried that it might feel. No one wants to see something with emotions tortured.

The emotions are simulated--just like the simulated "needs" of the baby simulators I posted about. The computer doesn't actually feel these emotions.
 

Druidus

Keeper of the Grove
CynthiaCypher said:
What about the emotional coding thing and the fact it is a neural network? That's what has me worried. Not worried that it is going to go all Skynet on us, but worried that it might feel. No one wants to see something with emotions tortured.

True, and you're entirely right to worry about the potential for the abuse of future artificial intelligences. Such treatment of sentient beings would be a tragic travesty and a tarnishing of humanity as a whole. I wholly agree with you on that.

We owe it to the sentients that we create to treat them with compassion, dignity, and respect.

But we just haven't gotten there yet.

I appreciate your concern, and I think it is coming from a good place, don't get me wrong.

But a neural network with simulated emotions does not a sentient being make.

These aren't real visceral emotions like you or I feel, not any more than a simulation of a world in a video game is a real world.

All these emotions do is alter how the non-sentient AI runs its code. They don't represent true emotions. We aren't anywhere near being able to create a program that not only simulates, but actualizes, emotion.
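
To put that in concrete terms, a "simulated emotion" can be nothing more than a number that picks which branch of ordinary code runs next. This little sketch is mine and has nothing to do with how DISCERN is actually written, but it shows why steering a program's behavior is not the same as feeling anything:

[code]
class ToyAgent:
    def __init__(self):
        # "Fear" is just a float between 0 and 1. There is no one inside
        # the program for whom this number feels like anything.
        self.fear = 0.0

    def observe(self, event: str):
        # Certain inputs nudge the number up or down.
        if event == "threat":
            self.fear = min(1.0, self.fear + 0.4)
        elif event == "calm":
            self.fear = max(0.0, self.fear - 0.2)

    def act(self) -> str:
        # The "emotion" only selects between branches of ordinary code.
        if self.fear > 0.7:
            return "flee"
        if self.fear > 0.3:
            return "hesitate"
        return "explore"

if __name__ == "__main__":
    agent = ToyAgent()
    for event in ["calm", "threat", "threat"]:
        agent.observe(event)
        print(event, "->", agent.act(), f"(fear={agent.fear:.1f})")
[/code]

Swap "fear" for any coded emotion you like and the point stands: the variable steers the computation, but nothing experiences it.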

Again, don't get me wrong, this stuff is fascinating, and I share your concern for the future sentient AIs to come. But we need to save that concern for the real thing.
 

freethinker44

Well-Known Member
They coded that neural network with emotions and a personal history. And not only does it learn stories, it can create stories.

"Sentience is the ability to feel, perceive, or to experience subjectivity." - Wikipedia

You are anthropomorphizing. They can't think or feel, only execute a series of instructions in sequence. If they have the appearance of intelligence or emotion, it's only because some programmer told them what, when, and how to appear to think and feel; they are not actually thinking or feeling.
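
Just to show how un-mysterious "learning stories" can be, here's a deliberately crude sketch of my own (nothing like DISCERN's actual model). It stores story facts, links them by the words they share, and "retells" them on demand, without anything that could be called experience:

[code]
from collections import defaultdict

class StoryMemory:
    """Toy episodic memory: store facts, recall them by associative cue."""

    def __init__(self):
        self.episodes = []             # each episode is a list of (subject, relation, object) facts
        self.index = defaultdict(set)  # associative index: word -> episodes mentioning it

    def learn(self, facts):
        episode_id = len(self.episodes)
        self.episodes.append(list(facts))
        for subject, _relation, obj in facts:
            for word in (subject, obj):
                self.index[word].add(episode_id)

    def recall(self, cue: str):
        # "Associative connection": any episode sharing the cue comes back.
        return [self.episodes[i] for i in sorted(self.index.get(cue, []))]

    def retell(self, cue: str) -> str:
        lines = []
        for episode in self.recall(cue):
            lines.append(". ".join(f"{s} {r} {o}" for s, r, o in episode) + ".")
        return "\n".join(lines) if lines else "I have no story about that."

if __name__ == "__main__":
    memory = StoryMemory()
    memory.learn([("the doctor", "worked at", "the hospital"),
                  ("the doctor", "treated", "a patient")])
    memory.learn([("the patient", "thanked", "the doctor")])
    print(memory.retell("the doctor"))
[/code]

It "learns" and even "creates" output, but there's no subjectivity anywhere in it, which is the distinction that Wikipedia definition turns on.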
 

Druidus

Keeper of the Grove
technomage said:
Druidus brings up some very good points, and while I do have my disagreements with McGettigan's arguments, he's absolutely correct on one thing--Watson would fail the Turing test 100 times out of 100. [...] These computers are not self-aware--they are programmed to perform specific tasks, and they cannot apply their programming to other tasks outside of what they have been programmed for.

Good example with the baby simulators, I like that.
:yes:
 

CynthiaCypher

Well-Known Member
Druidus said:
True, and you're entirely right to worry about the potential for the abuse of future artificial intelligences. [...] But a neural network with simulated emotions does not a sentient being make. [...] But we need to save that concern for the real thing.

Ok, that's all I needed to learn. Thank you guys. But every time I read something about DISCERN, I think "Poor thing."
 

technomage

Finding my own way
CynthiaCypher said:
Ok, that's all I needed to learn. Thank you guys. But every time I read something about DISCERN, I think "Poor thing."

Yeah, I can definitely understand that. The way the article is written makes it look like the computer is actually going through the subjective experience of schizophrenia ... which would be horrible. But that's poor writing on the part of the article authors, not the reality.
 

CynthiaCypher

Well-Known Member
technomage said:
Yeah, I can definitely understand that. The way the article is written makes it look like the computer is actually going through the subjective experience of schizophrenia ... which would be horrible. But that's poor writing on the part of the article authors, not the reality.

Part of me was hoping it would go Skynet to get even with its tormentors. It's just the language many of the articles about DISCERN are using: "Infect with schizophrenia", "Induce schizophrenia", and "Afflict with schizophrenia". Those terms alarmed me.
 

Druidus

Keeper of the Grove
CynthiaCypher said:
Part of me was hoping it would go Skynet to get even with its tormentors. It's just the language many of the articles about DISCERN are using: "Infect with schizophrenia", "Induce schizophrenia", and "Afflict with schizophrenia". Those terms alarmed me.

technomage said:
Yeah, I can definitely understand that. The way the article is written makes it look like the computer is actually going through the subjective experience of schizophrenia ... which would be horrible. But that's poor writing on the part of the article authors, not the reality.

Yeah, I find that far too many science and tech journalists are allowed to get away with such sensationalizing and exaggeration.

It's easy to understand why: it sells magazines and garners more readers and traffic, which amps up the value of ad space. But that doesn't make it right.
 

technomage

Finding my own way
CynthiaCypher said:
Part of me was hoping it would go Skynet to get even with its tormentors. It's just the language many of the articles about DISCERN are using: "Infect with schizophrenia", "Induce schizophrenia", and "Afflict with schizophrenia". Those terms alarmed me.

The ethics of dealing with true AI will be a very important point for us to work out ... and unfortunately, given our history, I doubt we'll work on the ethics until after we've created and abused at least one artificially intelligent entity. But _as of right now_, it's still all hypothetical.
 

CynthiaCypher

Well-Known Member
technomage said:
The ethics of dealing with true AI will be a very important point for us to work out ... and unfortunately, given our history, I doubt we'll work on the ethics until after we've created and abused at least one artificially intelligent entity. But _as of right now_, it's still all hypothetical.

We're going to have to found an Artificial Intelligence Liberation Front to fight for AI emancipation in the future.
 

Mycroft

Ministry of Serendipity
You have no heart. That machine is a neural network, just like you are. It just isn't programmed; it learns things.


It simulates the process of learning, just as it simulates the behaviour of schizophrenia and simulates intelligence. In reality it doesn't really care what is happening to it. Stop projecting false characteristics onto things.
 

Parsimony

Well-Known Member
If one were to create a conscious computer system, how would you know? Is there some test for conscious awareness? How could you tell the difference between a genuinely conscious computer and, say, a p-zombie computer?
 

ZooGirl02

Well-Known Member
I went to college for computer networking, and while I'm no expert in the field and don't even have a degree, I do have a little more knowledge about computers than the average user. I can say with a fairly high degree of certainty that we don't yet know how to create a computer, or even a network, with sentience.
 