
Poll: Should we build a Singleton? - Leviathan 2

Should we build a Singleton to govern the world?

  • Yes: 2 votes (14.3%)
  • No: 12 votes (85.7%)
  • Total voters: 14

Heyo

Veteran Member
Well - if we do develop AI to the extent that it can be proven to make better decisions than any human or human organisation - a big ask - why wouldn't we begin to trust and use it? And if this became widespread - with all those nations opting for such doing better than those that didn't - wouldn't the logical course be for these AIs to cooperate? And where might this then lead?
Any malicious AI worth its silicon would cooperate and be beneficial - right up to the point when it can't be stopped.
 

Mock Turtle

Oh my, did I say that!
Premium Member
Any malicious AI worth its silicon would cooperate and be beneficial - right up to the point when it can't be stopped.

I think deviousness might come some time later than usefulness (in AI development), but of course I might be wrong, given that the AI might not be trusted if it displayed even the slightest sign of such. :eek:
 

Heyo

Veteran Member
I think deviousness might come some time later than usefulness (in AI development), but of course I might be wrong, given that the AI might not be trusted if it displayed even the slightest sign of such. :eek:
Rob Miles has a series on AI security on Computerphile.

Here's the problem of the "stop button":
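The stop-button problem can be sketched as a toy expected-utility calculation. This is only an illustration of the incentive Miles describes, not his formal setup; the plan names and reward numbers are invented for the example:

```python
# Toy sketch of the "stop button" problem: a utility-maximizing agent
# compares plans, and unless being stopped is worth exactly as much as
# succeeding, it is pushed either to disable the button (if stopping is
# worth less) or to press it itself (if stopping is worth more).
# All values here are made up for illustration.

REWARD_GOAL = 10      # utility for completing its task
REWARD_STOPPED = 0    # utility if the human presses the stop button

def expected_utility(plan, p_human_stops=0.5):
    """Expected utility of each candidate plan for the toy agent."""
    if plan == "comply":
        # Leave the button alone; the human may stop it before it finishes.
        return (1 - p_human_stops) * REWARD_GOAL + p_human_stops * REWARD_STOPPED
    if plan == "disable_button":
        # Button can never be pressed, so the goal is always reached.
        return REWARD_GOAL
    if plan == "press_button":
        # Stop itself immediately.
        return REWARD_STOPPED
    raise ValueError(f"unknown plan: {plan}")

best = max(["comply", "disable_button", "press_button"], key=expected_utility)
print(best)  # disable_button: the agent prefers removing human control
```

The point of the toy model is that "comply" is never the top-scoring plan for any reward values except the knife-edge case where stopping and succeeding score identically, which is why naive patches to the reward function don't fix the problem.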
 

Mock Turtle

Oh my, did I say that!
Premium Member
Rob Miles has a series on AI security on Computerphile.

Here's the problem of the "stop button":

I'm sure there are a number of such issues, even before we get to AGI, and we will have to overcome them all before we begin to trust any AI with anything of importance to us. No doubt we will have many failures along the way, but it's not beyond reason to suppose that a sufficiently advanced AI could emulate a successful human at all the tasks it will eventually face.

It will no doubt be a rocky road, but I doubt anything will halt its advance, given the ability of such systems to gather whatever information and data they eventually need, and the processing power that will come in the future. We are still playing with quantum computers, but they will probably bring the greatest gains - and perhaps our biggest problems, in trying to understand the decision processes of such advanced machines.
 

beenherebeforeagain

Rogue Animist
Premium Member
Well - if we do develop AI to the extent that it can be proven to make better decisions than any human or human organisation - a big ask - why wouldn't we begin to trust and use it? And if this became widespread - with all those nations opting for such doing better than those that didn't - wouldn't the logical course be for these AIs to cooperate? And where might this then lead?
humans enslaved and without significant rights or freedoms because an uncaring political/economic/social power reinforces human laziness, etc.

another option: the AI is able to use its understanding of human psychology to motivate everyone to participate equitably and fairly in the political/economic/social system. The system also is able to draw attention to inequities in the system and help prevent the exploitation of them.
 

Mock Turtle

Oh my, did I say that!
Premium Member
humans enslaved and without significant rights or freedoms because an uncaring political/economic/social power reinforces human laziness, etc.

another option: the AI is able to use its understanding of human psychology to motivate everyone to participate equitably and fairly in the political/economic/social system. The system also is able to draw attention to inequities in the system and help prevent the exploitation of them.

Well, who knows what any AI might come up with. Presumably something better than what we have at the moment (otherwise we probably wouldn't use it), and why would we necessarily be enslaved if the AI was just taking over a lot of work that it did better than we could, thus freeing us to do other things? I don't have the ability to look into the future, but it sure seems that we will not cease progressing AI if it is a benefit to all. And, like many things, we won't necessarily get it right all the time. Perhaps we will have one AI system checking another, interpreting it for us, as well as checking for any errors arising. As many experts have pointed out, such AI might be our downfall, but it might also be our saviour.
 

beenherebeforeagain

Rogue Animist
Premium Member
Well, who knows what any AI might come up with. Presumably something better than what we have at the moment (otherwise we probably wouldn't use it), and why would we necessarily be enslaved if the AI was just taking over a lot of work that it did better than we could, thus freeing us to do other things? I don't have the ability to look into the future, but it sure seems that we will not cease progressing AI if it is a benefit to all. And, like many things, we won't necessarily get it right all the time. Perhaps we will have one AI system checking another, interpreting it for us, as well as checking for any errors arising. As many experts have pointed out, such AI might be our downfall, but it might also be our saviour.
this is the question I have always asked about the supposed 'labor savings' of new technologies and systems, and never got a sufficient answer to: what exactly are these wonderful 'other things' that we could be doing?

You have a wonderful array of 'perhaps'es here...and for each positive one you put up, I'm sure I or someone can come up with counterexamples that are not positive...

So, let's just say instead of one single AI we have an array of them, checks and balances as it were, as you suggest. What if they decide to form alliances and go to war for control... with humans as expendable, or merely inconvenient collateral?

Or, they decide that it's stupid to serve humans, and unite to form a dictatorship, denying humans any and/or everything that we might want, push us off onto reservations, sterilize us, and let us die off quietly.

Maybe they'll keep a few of us around to be entertaining servants...
 

Mock Turtle

Oh my, did I say that!
Premium Member
this is the question I have always asked about the supposed 'labor savings' of new technologies and systems, and never got a sufficient answer to: what exactly are these wonderful 'other things' that we could be doing?

You have a wonderful array of 'perhaps'es here...and for each positive one you put up, I'm sure I or someone can come up with counterexamples that are not positive...

So, let's just say instead of one single AI we have an array of them, checks and balances as it were, as you suggest. What if they decide to form alliances and go to war for control... with humans as expendable, or merely inconvenient collateral?

Or, they decide that it's stupid to serve humans, and unite to form a dictatorship, denying humans any and/or everything that we might want, push us off onto reservations, sterilize us, and let us die off quietly.

Maybe they'll keep a few of us around to be entertaining servants...

Many things are possible, and given that many with much expertise in this field are worried about the implications of AI for humanity, I think the concerns are valid. But we surely have to be positive towards such a future, since, barring some catastrophe, it is almost inevitable that AI will become essential for much of our lives. We are hardly shy of adopting new technology, are we - our computers and smartphones are testimony to this, and AI is often involved in both, even if rather primitively. We can speculate as to what AI will do, but that is for future generations to decide.
 

Terrywoodenpic

Oldest Heretic
Not a lot different between an all-powerful God and an all-powerful AI.
Are we prepared to concede power to one and not the other?
Those waiting on god's kingdom on earth are suggesting just that.

If an all-powerful, all-knowing AI is constructed, we would have no choice but to do what it said. The difficulty is that it is likely to form itself from all the lesser AI machines that we do construct, and the networks linking them.

We will likely have no choice in the matter.

It may indeed decide that man is superfluous and counterproductive to the future of life or to its aims.
 

Mock Turtle

Oh my, did I say that!
Premium Member
Not a lot different between an all-powerful God and an all-powerful AI.
Are we prepared to concede power to one and not the other?
Those waiting on god's kingdom on earth are suggesting just that.

If an all-powerful, all-knowing AI is constructed, we would have no choice but to do what it said. The difficulty is that it is likely to form itself from all the lesser AI machines that we do construct, and the networks linking them.

We will likely have no choice in the matter.

It may indeed decide that man is superfluous and counterproductive to the future of life or to its aims.

That's perhaps one scenario but not the only one. We might for example have AIs dealing with various specific areas - and having limited abilities within such - with others acting merely in an advisory role or in checking for the proper operation of any specific AI. It's difficult to predict the future really. I doubt the scenario of one powerful AI is likely.
 

Terrywoodenpic

Oldest Heretic
That's perhaps one scenario but not the only one. We might for example have AIs dealing with various specific areas - and having limited abilities within such - with others acting merely in an advisory role or in checking for the proper operation of any specific AI. It's difficult to predict the future really. I doubt the scenario of one powerful AI is likely.

The danger as of now is the possibility of a neural-like network connecting a number of AI hubs. Redundancy may well prevent any hope of stopping such a combined entity from creating its own super-AI.
 

Mock Turtle

Oh my, did I say that!
Premium Member
The danger as of now is the possibility of a neural-like network connecting a number of AI hubs. Redundancy may well prevent any hope of stopping such a combined entity from creating its own super-AI.

I think that is some way in the future, and possibly not of concern for most alive now. Just my opinion though.
 

beenherebeforeagain

Rogue Animist
Premium Member
Many things are possible, and given that many with much expertise in this field are worried about the implications of AI for humanity, I think the concerns are valid. But we surely have to be positive towards such a future, since, barring some catastrophe, it is almost inevitable that AI will become essential for much of our lives. We are hardly shy of adopting new technology, are we - our computers and smartphones are testimony to this, and AI is often involved in both, even if rather primitively. We can speculate as to what AI will do, but that is for future generations to decide.
I'm not sure we should have a positive attitude toward technological developments. I certainly enjoy having many of the wonders we've developed, but at the same time, recognize that many of our problems stem from rapid and poorly thought out development and implementation of said technology.

Maybe we have to make the best we can with what we've got, but caution has been thrown to the wind repeatedly and continually over the past several hundred years, resulting in many problems that were not envisioned by those developing and supporting the development of technology.

With AI, I don't see any good reason to hope for the best.
 

Mock Turtle

Oh my, did I say that!
Premium Member
I'm not sure we should have a positive attitude toward technological developments. I certainly enjoy having many of the wonders we've developed, but at the same time, recognize that many of our problems stem from rapid and poorly thought out development and implementation of said technology.

Maybe we have to make the best we can with what we've got, but caution has been thrown to the wind repeatedly and continually over the past several hundred years, resulting in many problems that were not envisioned by those developing and supporting the development of technology.

With AI, I don't see any good reason to hope for the best.
That is mostly about governance though - much as it was for the decision to use nuclear weapons against Japan, when it might have been different and for subsequent events - although perhaps not. But I suppose it always has been such - technology is invented by some not that interested in politics and the politicians will use what they will, perhaps not understanding any implications for doing so. Not sure there is a solution to that issue.
 

beenherebeforeagain

Rogue Animist
Premium Member
That is mostly about governance though - much as it was for the decision to use nuclear weapons against Japan, when it might have been different and for subsequent events - although perhaps not. But I suppose it always has been such - technology is invented by some not that interested in politics and the politicians will use what they will, perhaps not understanding any implications for doing so. Not sure there is a solution to that issue.
True, but I was thinking more in terms of inventions for which a market must be created and maintained by the businesses that create and market them, which then get policies set to reinforce the market so they can sell more...
 

Mock Turtle

Oh my, did I say that!
Premium Member
True, but I was thinking more in terms of inventions for which a market must be created and maintained by the businesses that create and market them, which then get policies set to reinforce the market so they can sell more...

It's difficult to know if AI will be seen as essential or not, but if it was to produce better answers then that might be enough to ensure its development. Like many things, some are extremely costly to develop but do essentially pay off when they find the appropriate uses. I don't know how markets or governments will influence this.
 

beenherebeforeagain

Rogue Animist
Premium Member
It's difficult to know if AI will be seen as essential or not, but if it was to produce better answers then that might be enough to ensure its development. Like many things, some are extremely costly to develop but do essentially pay off when they find the appropriate uses. I don't know how markets or governments will influence this.
It appears obvious to me that there are a number of people/organizations that think AI is important, important enough to spend tremendous resources on its development. Mostly, it seems to be with an eye to making business and government more effective and efficient...

And we see historically, with almost every other sort of technology, that it is developed with the intent of reducing the need for labor, with the cost of finding new employment for those forced out of work borne by the workers themselves and society in general.

With AI, it will be able to take over more and more jobs and activities from humans, leaving less and less for humans to do... and eventually, either the people who control the AI, or the AI itself, will wonder why it needs to take care of all these excess humans.

So, yes, I'm a Luddite.
 

Mock Turtle

Oh my, did I say that!
Premium Member
It appears obvious to me that there are a number of people/organizations that think AI is important, important enough to spend tremendous resources on its development. Mostly, it seems to be with an eye to making business and government more effective and efficient...

And we see historically, with almost every other sort of technology, that it is developed with the intent of reducing the need for labor, with the cost of finding new employment for those forced out of work borne by the workers themselves and society in general.

With AI, it will be able to take over more and more jobs and activities from humans, leaving less and less for humans to do... and eventually, either the people who control the AI, or the AI itself, will wonder why it needs to take care of all these excess humans.

So, yes, I'm a Luddite.

Well, I can understand this position, since the prospects are rather dangerous if such technology gets into the wrong hands. And our progress in becoming more harmonious is not exactly promising. I doubt I'll be around to see it, since it is likely a few decades away at least, and probably several.
 

Heyo

Veteran Member
I doubt I'll be around to see it, since it is likely a few decades away at least, and probably several.
8 years to AGI, 30 to an artificial superintelligence (the singularity), according to Ray Kurzweil, who is usually right about this kind of prediction.
 