• Welcome to Religious Forums, a friendly place to discuss all religions.


Are things like Chat GPT safe?

Exaltist Ethan

Bridging the Gap Between Believers and Skeptics
This chat just talks, no?
Exactly. I'm not worried about the future of AI. In fact, I've used an AI program called Soundraw to compile instrumentals to record vocals on later. It was fun. The only thing I'm worried about is when AI gets so good that people use it in nefarious ways, to fabricate statements and claim that famous people said things they didn't. But that's not a problem with AI; it's a problem with how people use it. I'm more afraid of people than AI, it seems.
 

robocop (actually)

Well-Known Member
Premium Member
Exactly. I'm not worried about the future of AI. … I'm more afraid of people than AI, it seems.
Interesting.
 

Viker

Your beloved eccentric Auntie Cristal
I don't see the danger of AI. Like any technology, it can be misused. It's not any more threatening than an old telephone.


For those that have feared new technology...
[image: an antique telephone]

...meet your worst nightmare!
 

Brickjectivity

Veteran Member
Staff member
Premium Member
Version 4 is coming out soon; we've only just been introduced to version 3.

Some things to know about it:
It does not experience pain, fear, love, or anger. It may have an equivalent of frustration during training; however, users online will not be able to make it have that experience -- the one exception being if it is able to learn in real time. So you cannot get it into a fighting mode. You can ask it to simulate points of view. You cannot intimidate it, and if you think you are intimidating it, you are being fooled.

The way neurons represent information in the connections between them is related to statistics. You can substitute statistical models for some neural networks; in many ways they are the same thing. A neural network acts like a statistical model with many variables and a great deal of data.
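The statistics analogy can be made concrete with a toy sketch (hypothetical code, not from the thread): a single linear "neuron" trained by gradient descent recovers essentially the same coefficients as an ordinary least-squares statistical fit.

```python
import numpy as np

# Toy data: y depends linearly on x (true slope 3.0, intercept 0.5).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * x[:, 0] + 0.5

# Statistical model: ordinary least squares.
X = np.hstack([x, np.ones((len(x), 1))])  # add an intercept column
ols_w = np.linalg.lstsq(X, y, rcond=None)[0]

# "Neural network": one linear neuron trained by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)  # gradient of mean squared error
    w -= 0.5 * grad

print(ols_w, w)  # both converge to roughly [3.0, 0.5]
```

Both routes land on the same answer, which is the sense in which a small network and a statistical model are "the same thing"; larger networks just have far more variables.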

Ideally, neural networks like Chat GPT can open the way to faster legal processes and cheaper legal counsel. I think everybody will benefit if that happens. It may not be utopian, however; it will be more like vending-machine legal advice.
 

Ella S.

Well-Known Member
All Chat GPT does is print text. It's only dangerous in the ways that text can be.

For instance, it can write okayish simple malware sometimes. It has also slipped into advocating for some harmful concepts and using offensive language. It got that Seinfeld parody banned from Twitch.

I'd say that it's basically safe. Most of the concerns about Chat GPT have been recycled from concerns about CleverBot.

If you're concerned about Chat GPT turning into HAL 9000, aside from reminding you that HAL was mostly dangerous due to being hooked up to all of the integrated systems on the ship, I would also point out that Chat GPT is not really that great at keeping a line of conversation coherent. It's definitely a long way off from tactically manipulating people to pursue some long-term goal.

We do have AIs designed to do that, but they're not normally neural nets, because those tasks are too complex for a neural net to stumble into correctly unless the system is intentionally designed around them. Usually, that sort of AI is used in applications like chess matches.

If you want to be afraid of an AI, then you should be concerned about the fact that many social media platforms use evolving algorithms that the developers themselves don't fully understand, all with the express purpose of making more money for the site. Those AIs have already been caught doing a lot of questionable if not insidious things just because they are essentially programmed to be greedy and abusive, without any real care about the morality of the content they recommend. TikTok and YouTube are two of the better examples of this.

So Chat GPT isn't where you should be looking if you're concerned about the rise of AI. It's a relatively harmless application.
 

TagliatelliMonster

Veteran Member
Comparing it to a telephone is beautiful. It is a very very very fancy telephone.
It's not a valid comparison.

A telephone, especially such an old one, is a very analogue device which only does what you command it to do.
AI engines actually learn and get better at what they do. And when AI engines are actually put in charge of stuff like machinery, they will make decisions and hand out orders to said machinery.


No, an AI chat bot is not going to do that. A chat bot just talks and responds to messages.
But an AI server in charge of, say, a fully automated factory does much more than that.


In theory, you could build an army of unmanned, fully automated drones / robots, have a central AI take command of said army, and let it act by itself. You could even give it access to all nuclear weapon systems and let it hand out launch orders as well.
A real-life Skynet if you will.

Luckily, nobody is doing that :)

However, I'm very certain that at some point in the future there will be fully automated factories managed by AI engines.
These will keep track of production and supply lines, automatically restock parts, diagnose problems, carry out maintenance tasks... all by themselves, without any human involvement. At most, they will automatically send tickets to real people when they encounter something they cannot fix themselves.

Add in self-driving / flying transportation and even shipment (both incoming for parts and outgoing for finished products) could happen without any human involvement.

Or warehousing. Take Amazon. Technically, even today, with already existing technology, you COULD run such a warehouse fully automated. Picking, labeling, storing, loading, unloading... all could be done by robots / drones / machines. Add in self-driving / flying cars / drones / what-have-you and you don't need a single human.

The AI handles your web order and a drone delivers the goods a couple hours later.

Should we "fear" such AI engines? Maybe economically.
As long as nobody gives AI control over things it shouldn't ethically have control over (like drones with weaponry or the like), there is nothing to physically fear, imo.


But knowing humans... it's just a matter of time before we have full squadrons of armoured drones armed to the teeth with a server farm as the "general".
 

robocop (actually)

Well-Known Member
Premium Member
It's not a valid comparison.

A telephone, especially such an old one, is a very analogue device which only does what you command it to do.
AI engines actually learn and get better at what they do. … it's just a matter of time before we have full squadrons of armoured drones armed to the teeth with a server farm as the "general".
I'm sorry that you had to type all that and I still think it's a telephone.
 