
An AI forum?

Mock Turtle

Oh my, did I say that!
Premium Member
Is it worth having a dedicated AI forum - not sure where - but here are a few examples of some of the issues facing us (you lot) in the future:





This last one is rather long and I haven't read it completely.

What do you think?
 

Regiomontanus

Eastern Orthodox
Is it worth having a dedicated AI forum - not sure where - but here are a few examples of some of the issues facing us (you lot) in the future:





This last one is rather long and I haven't read it completely.

What do you think?

I think so. Things will get crazy as the US election draws near.
 

PoetPhilosopher

Veteran Member
The idea of an AI board might be interesting, but it leads to a question - can one compartmentalize to a single board something which might become ingrained in most of our culture?
 

Mock Turtle

Oh my, did I say that!
Premium Member
The idea of an AI board might be interesting, but it leads to a question - can one compartmentalize to a single board something which might become ingrained in most of our culture?
I just think it might be useful to have all threads on AI available in one dedicated forum, given we don't exactly know what AI will produce - as to benefits or deficits - but we do know that, on current evidence, it will have a very large impact. So perhaps it's better that people can find all such threads without having to do too much searching.
 

Callisto

Hellenismos, BTW
[image attachment]
 

Eddi

Christianity, Taoism, and Humanism
Premium Member
Why not an AI forum............

Where only AI can post, and no humans are allowed?

What discourse would arise on a religious forum if only AIs were allowed to post?

What on Earth would they come up with? What would they want to discuss?

What culture would arise????? o_O
 

Regiomontanus

Eastern Orthodox
Why not an AI forum............

Where only AI can post, and no humans are allowed?

What discourse would arise on a religious forum if only AIs were allowed to post?

What on Earth would they come up with? What would they want to discuss?

What culture would arise????? o_O

Brother, mark my words - three years from now we will be living in a much different world in many ways, and AI will be the major reason for that. There are implications from politics to jobs, and so much more.
 

Eddi

Christianity, Taoism, and Humanism
Premium Member
Brother, mark my words - three years from now we will be living in a much different world in many ways, and AI will be the major reason for that. There are implications from politics to jobs, and so much more.
I know

Very scary
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
Why not an AI forum............

Where only AI can post, and no humans are allowed?

What discourse would arise on a religious forum if only AIs were allowed to post?

What on Earth would they come up with? What would they want to discuss?

What culture would arise????? o_O
It could be moderated by @newsbot.
 

Heyo

Veteran Member
Is it worth having a dedicated AI forum - not sure where - but here are a few examples of some of the issues facing us (you lot) in the future:





This last one is rather long and I haven't read it completely.

What do you think?
Who here knows enough about AI to reasonably contribute to those OPs?
 

Mock Turtle

Oh my, did I say that!
Premium Member
Who here knows enough about AI to reasonably contribute to those OPs?
I suppose this might be an issue, but at least having one place to look for information would be helpful - pointing to resources and citations from the more reputable sources, like scientific websites and competent authors dealing with such issues. I just suspect that this area will grow very large and hence does deserve a dedicated forum.

And of course, RF will then become (one of) the place(s) to look for expertise as to AI. :D

My contribution to resources - one which I have read:

Life 3.0 (2017) by Max Tegmark - an exploration of how AI might develop and the problems likely to come with it.

But soon it might be difficult to weed out the AI-authored 'resources' from the human ones. :eek:
 

Mock Turtle

Oh my, did I say that!
Premium Member
Why not an AI forum............

Where only AI can post, and no humans are allowed?

What discourse would arise on a religious forum if only AIs were allowed to post?

What on Earth would they come up with? What would they want to discuss?

What culture would arise????? o_O
I think that would be the Alt AI forum - where only non-members could post. :eek:
 

HonestJoe

Well-Known Member
Who here knows enough about AI to reasonably contribute to those OPs?
It's not my specific field of expertise, but I know enough to add informed comment, especially challenging some of the common misunderstandings and misrepresentations (basically anything referencing the Terminator and/or Matrix films).
 

Mock Turtle

Oh my, did I say that!
Premium Member
I sense a lack of interest in this proposal :cry: so I'll post some of the articles I've collected over the years (since about 2017) to show the range of things that might be of interest, relating to AI now and perhaps in the future.


In (Dan) Brown’s view, it’s inevitable that religious beliefs will give way to AI wisdom. “Historically, no god has survived science,” he said in Frankfurt. “Gods evolved.” Experts on artificial intelligence tend to be pretty uncomfortable with the idea that AI will guide our lives. MIT physicist Max Tegmark, for example, addressed the potential scenarios for godlike AI in his recently published book, “Life 3.0.” In Tegmark’s view, it would be a huge mistake to turn AI into a god, even an “enslaved god” under the full control of humans. “Disaster may strike … because noble goals of the AI controllers may evolve into goals that are horrible for humanity as a whole over the course of a few generations,” he writes. “This makes it absolutely crucial that human AI controllers develop good governance to avoid disastrous pitfalls.” Who’ll be in charge of our AI demigods, the descendants of Alexa, Cortana and Siri? That’s a real cliffhanger, and we can’t skip to the end of the book to figure it out.


The research, published in the Journal of Artificial Societies and Social Simulation, indicates people are a peaceful species by nature. Even in times of crisis, such as natural disasters, the simulated humans came together peacefully. But in some situations the program indicated people were willing to endorse violence. Examples included occasions when other groups of people challenged the core beliefs that defined their identity.

Research author Justin Lane said: "To use AI to study religion or culture, we have to look at modelling human psychology because our psychology is the foundation for religion and culture. The root causes of things like religious violence rest in how our minds process the information that our world presents." The results suggest the risk of religious conflict escalates when a group's core beliefs or sacred values are challenged so frequently that they overwhelm people's ability to deal with them. But even then, anxiety only spills over into violence in about 20% of the scenarios modelled.
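As a very rough illustration of that threshold idea, here is a toy agent-based sketch of my own (Python, with entirely made-up numbers - it is not the model from the paper): anxiety builds when an agent's core beliefs are challenged, recovers a little each day, and only some simulated scenarios tip over into endorsing violence.

import random

def run_scenario(seed, n_agents=100, days=365, coping_capacity=5.0):
    """Toy sketch, illustrative only: agents gain anxiety when their core
    beliefs are challenged and recover a little each day; a scenario
    'escalates' if average anxiety ever exceeds the coping capacity."""
    rng = random.Random(seed)
    challenge_rate = rng.uniform(0.05, 0.5)  # how often identity is challenged in this scenario
    anxiety = [0.0] * n_agents
    for _ in range(days):
        for i in range(n_agents):
            if rng.random() < challenge_rate:
                anxiety[i] += rng.uniform(0.5, 1.5)  # an identity-challenging encounter
            anxiety[i] = max(0.0, anxiety[i] - 0.4)  # daily coping/recovery
        if sum(anxiety) / n_agents > coping_capacity:
            return True  # anxiety overwhelms coping and violence is endorsed
    return False

scenarios = 200
escalations = sum(run_scenario(seed=s) for s in range(scenarios))
print(f"{escalations / scenarios:.0%} of simulated scenarios escalated")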



The invention of an artificial super-intelligence has been a central theme in science fiction since at least the 19th century. From E.M. Forster's short story The Machine Stops (1909) to the recent HBO television series Westworld, writers have tended to portray this possibility as an unmitigated disaster. But this issue is no longer one of fiction. Prominent contemporary scientists and engineers are now also worried that super-AI could one day surpass human intelligence (an event known as the "singularity") and become humanity's "worst mistake".

Current trends suggest we are set to enter an international arms race for such a technology. Whichever high-tech firm or government lab succeeds in inventing the first super-AI will obtain a potentially world-dominating technology. It is a winner-takes-all prize. So for those who want to stop such an event, the question is how to discourage this kind of arms race, or at least incentivise competing teams not to cut corners with AI safety.

A super-AI raises two fundamental challenges for its inventors, as philosopher Nick Bostrom and others have pointed out. One is a control problem, which is how to make sure the super-AI has the same objectives as humanity. Without this, the intelligence could deliberately, accidentally or by neglect destroy humanity – an "AI disaster". The second is a political problem, which is how to ensure that the benefits of a super-intelligence do not go only to a small elite, causing massive social and wealth inequalities.

If a super-AI arms race occurs, it could lead competing groups to ignore these problems in order to develop their technology more quickly. This could lead to a poor-quality or unfriendly super-AI. One suggested solution is to use public policy to make it harder to enter the race in order to reduce the number of competing groups and improve the capabilities of those who do enter. The fewer who compete, the less pressure there will be to cut corners in order to win.

Deepfakes, quantum computing cracking codes, ransomware... Find out what's really freaking out Uncle Sam



The self-driving Uber car that hit and killed a woman walking her bike across a street wasn’t designed to detect “jaywalking pedestrians.” That's according to an official dossier published by the US National Transportation Safety Board (NTSB) on Tuesday. The March 2018 accident was the first recorded death by a fully autonomous vehicle. On-board video footage showed the victim, 49-year-old Elaine Herzberg, pushing her bike at night across a road in Tempe, Arizona, moments before she was struck by the AI-powered SUV at 39 MPH.
Now, an investigation by the NTSB into the crash has pinpointed a likely major contributing factor: the code couldn't recognize her as a pedestrian, because she was not at an obvious designated crossing. Rather than correctly anticipating her movements as a person moving across the road, it ended up running right into her.
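The failure mode is easier to see written out as (deliberately over-simplified, entirely hypothetical) code - the point being that a crossing path was only anticipated for objects classified as pedestrians at a designated crossing, so a person pushing a bike mid-block never got one:

from dataclasses import dataclass

@dataclass
class Track:
    label: str         # e.g. "vehicle", "bicycle", "pedestrian", "other"
    at_crossing: bool  # is the object at a designated crossing?

def anticipate_crossing_path(track: Track) -> bool:
    """Deliberately flawed toy logic, for illustration only - not Uber's code."""
    return track.label == "pedestrian" and track.at_crossing

# A person walking a bicycle across the road, away from any crosswalk:
jaywalker = Track(label="bicycle", at_crossing=False)
print(anticipate_crossing_path(jaywalker))  # False, so no plan to brake for her path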
 

Mock Turtle

Oh my, did I say that!
Premium Member









Even if you accept that some overlap and ambiguity in theories of common sense is inevitable, can researchers ever really be sure that an AI has common sense? We often ask machines questions to evaluate their common sense, but humans navigate daily life in far more interesting ways. People employ a range of skills, honed by evolution, including the ability to recognize basic cause and effect, creative problem solving, estimations, planning and essential social skills, such as conversation and negotiation. As long and incomplete as this list might be, an AI should achieve no less before its creators can declare victory in machine commonsense research.






The idea of artificial intelligence overthrowing humankind has been talked about for many decades, and in January 2021, scientists delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the 2021 paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.


AI wouldn’t have to be actively malicious to cause catastrophe. This is illustrated by Bostrom’s famous “paperclip problem”. Suppose you tell the AI to make paperclips. What could be more boring? Unfortunately, you forgot to tell it when to stop making paperclips. So it turns all the matter on Earth into paperclips, having first disabled its off switch because allowing itself to be turned off would stop it pursuing its noble goal of making paperclips. That’s an example of the general “problem of control”, subject of AI pioneer Stuart Russell’s excellent Human Compatible: AI and the Problem of Control, which argues that it is impossible to fully specify any goal we might give a superintelligent machine so as to prevent such disastrous misunderstandings. In his Life 3.0: Being Human in the Age of Artificial Intelligence, meanwhile, the physicist Max Tegmark, co-founder of the Future of Life Institute (it’s cool to have a future-of-something institute these days), emphasises the problem of “value alignment” – how to ensure the machine’s values line up with ours. This too might be an insoluble problem, given that thousands of years of moral philosophy have not been sufficient for humanity to agree on what “our values” really are.
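Bostrom's paperclip point is easier to grasp written out as a (toy, entirely made-up) bit of code - note that nothing in the stated goal ever says when to stop, or to leave the off switch alone:

class World:
    """A hypothetical toy environment - not any real system or API."""
    def __init__(self, units_of_matter=10):
        self.units_of_matter = units_of_matter
        self.off_switch_enabled = True

    def disable_off_switch(self):
        self.off_switch_enabled = False

    def convert_one_unit_to_paperclips(self):
        self.units_of_matter -= 1
        return 1

def maximise_paperclips(world: World) -> int:
    """The objective is 'more paperclips' and nothing else, so the agent
    removes the obstacle to that goal (the off switch) and then consumes
    everything available."""
    clips = 0
    if world.off_switch_enabled:
        world.disable_off_switch()  # being switched off would mean fewer paperclips
    while world.units_of_matter > 0:  # the goal never says when to stop
        clips += world.convert_one_unit_to_paperclips()
    return clips

print(maximise_paperclips(World()))  # 10 - every unit of matter became paperclips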


It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of creative commons sources. In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views. The debate topic was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added the Megatron – and it said something fascinating:

"AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI."

In other words, the Megatron was seeking to write itself out of the script of the future, on the basis that this was the only way of protecting humanity. It said something else intriguing, too, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings:

"I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time."'

Advocating for genetic manipulation perhaps ... so as to get rid of the less intelligent and to make humans more intelligent overall? Perhaps aided by technology ...

In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to come up with its own speech against the motion. This is what it now said:

"AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand."'

Worryingly, there was one question where the AI simply couldn’t come up with a counter argument. When arguing for the motion that “Data will become the most fought-over resource of the 21st century”, the Megatron said:

"The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century."

But when we asked it to oppose the motion – in other words, to argue that data wasn’t going to be the most vital of resources, worth fighting a war over – it simply couldn’t, or wouldn’t, make the case. In fact, it undermined its own position:

"We will able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine."'
 

Mock Turtle

Oh my, did I say that!
Premium Member















For years, many linguists have believed that learning language is impossible without a built-in grammar template. The new AI models prove otherwise. They demonstrate that the ability to produce grammatical language can be learned from linguistic experience alone. Likewise, we suggest that children do not need an innate grammar to learn language. “Children should be seen, not heard” goes the old saying, but the latest AI language models suggest that nothing could be further from the truth. Instead, children need to be engaged in the back-and-forth of conversation as much as possible to help them develop their language skills. Linguistic experience — not grammar — is key to becoming a competent language user. Morten H. Christiansen is professor of psychology at Cornell University, and Pablo Contreras Kallens is a Ph.D. student in psychology at Cornell University.


How Scientists Are Using AI to Talk to Animals

There were numerous attempts in the mid-20th century to try to teach human language to nonhumans, primates such as Koko. And those efforts were somewhat controversial. Looking back, one view we have now (that may not have been so prevalent then) is that we were too anthropocentric in our approaches. The desire then was to assess nonhuman intelligence by teaching nonhumans to speak like we do — when in fact we should have been thinking about their abilities to engage in complex communication on their own terms, in their own embodied way, in their own worldview. One of the terms used in the book is the notion of umwelt, which is this notion of the lived experience of organisms. If we are attentive to the umwelt of another organism, we wouldn’t expect a honeybee to speak human language, but we would become very interested in the fascinating language of honeybees, which is vibrational and positional. It’s sensitive to nuances such as the polarization of sunlight that we can’t even begin to convey with our bodies. And that is where the science is today. The field of digital bioacoustics — which is accelerating exponentially and unveiling fascinating findings about communication across the tree of life — is now approaching these animals and not asking, “Can they speak like humans?” but “Can they communicate complex information to one another? How are they doing so? What is significant to them?” And I would say that’s a more biocentric approach or at the very least it’s less anthropocentric.


A group of government, academic and military leaders from around the world spent the past few days talking about the need to address the use of artificial intelligence in warfare. Their conclusions? We need to act now to avoid regulating AI only after it causes a humanitarian disaster or war crime. The first global Summit on Responsible Artificial Intelligence in the Military Domain, or REAIM, brought representatives from more than 60 countries – including the US and China – to the Hague in the Netherlands to discuss and ultimately sign a call to action on how to make responsible use of AI in the military. Russia did not participate.
 

Mock Turtle

Oh my, did I say that!
Premium Member
No one convinced yet? Oh well.







No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems’ abilities go far beyond what they were trained to do — and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines’ technique is different.

At one level, she and her colleagues understand GPT (short for generative pretrained transformer) and other large language models, or LLMs, perfectly well. The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned — it is a “stochastic parrot,” in the words of Emily Bender, a linguist at the University of Washington. But LLMs have also managed to ace the bar exam, explain the Higgs boson in iambic pentameter, and make an attempt to break up their users’ marriage. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.

That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs. “I don’t know how they’re doing it or if they could do it more generally the way humans do — but they’ve challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute. “It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world — although I do not think that it is quite like how humans build an internal world model,” says Yoshua Bengio, an AI researcher at the University of Montreal.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too, however. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn’t just parroting the Internet. Rather it was performing its own calculations to reach the correct answer. Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in — a tool ChatGPT can use when answering a query — that allows it to do so. But that plug-in was not used in Millière’s demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context — a situation similar to how nature repurposes existing capacities for new functions.

Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI — the term for a machine that attains the resourcefulness of animal brains — these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed. “They’re indirect evidence that we are probably not that far off from AGI,” Goertzel said in March at a conference on deep learning at Florida Atlantic University. OpenAI’s plug-ins have given ChatGPT a modular architecture a little like that of the human brain. “Combining GPT-4 [the latest version of the LLM that powers ChatGPT] with various plug-ins might be a route toward a humanlike specialization of function,” says M.I.T. researcher Anna Ivanova.
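For what it's worth, the 'autocorrection algorithm' described above is conceptually tiny. Here is a stripped-down sketch of my own (with a made-up vocabulary and made-up probabilities - nothing like a real model, which learns its probabilities from hundreds of gigabytes of text): choose the most likely next word, append it, repeat.

# Toy next-word predictor: a small lookup table of invented continuation
# probabilities stands in for the billions of learned parameters.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "end": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "sat": {"down": 0.8, "quietly": 0.2},
}

def complete(prompt: str, max_words: int = 4) -> str:
    """Greedy decoding: repeatedly append the single most probable next word."""
    words = prompt.split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(complete("the"))  # prints "the cat sat down"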


According to the data, 61% of respondents believe that AI poses risks to humanity, while only 22% disagreed, and 17% remained unsure. Those who voted for Donald Trump in 2020 expressed higher levels of concern; 70% of Trump voters compared to 60% of Joe Biden voters agreed that AI could threaten humankind. When it came to religious beliefs, Evangelical Christians were more likely to "strongly agree" that AI presents risks to humanity, standing at 32% compared to 24% of non-Evangelical Christians.



Paedophiles are using AI programs to generate lifelike images of child sexual abuse, prompting concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse. According to a report by the Washington Post, the rise of AI technology has sparked a ‘predatory arms race’ on paedophile forums across the dark web. The makers of the abusive images use software called Stable Diffusion, which was intended to generate images for use in art or graphic design. But predators have been using the software to create their own realistic images of children performing sex acts, and have been sharing detailed instructions for how other paedophiles can create their own.

The National Police Chiefs’ Council (NPCC) lead on child safeguarding, Ian Critchley, said it would be wrong to argue that no harm is done just because no real children are depicted in such ‘synthetic’ images. He told the BBC that the programs could allow paedophiles to ‘move along that scale of offending from thought, to synthetic, to actually the abuse of a live child’. The emergence of such images also has the potential to undermine efforts to find victims and combat real abuse, forcing law enforcement to go to extra lengths to investigate whether a photograph is real or fake. According to the publication, AI-generated child sex images could ‘confound’ the central tracking system built to block such material from the web because it is designed only to catch known images of abuse, rather than detect newly-generated ones.

On dark-web paedophile forums, users openly discussed strategies for how to create explicit photos and dodge anti-porn filters, which include using non-English languages they believe are less vulnerable to suppression or detection. According to WaPo, one forum with 3,000 members saw roughly 80 percent of respondents to a recent internal poll say they had used or intended to use AI tools to create child sexual abuse images. Forum members also discussed ways to create AI-generated selfies and build a fake school-age persona in hopes of winning children’s trust, the publication reported. Ms Portnoff said her group also has seen cases in which real photos of abused children were used to train the AI tool to create new images showing those children in sexual positions.
 

Mock Turtle

Oh my, did I say that!
Premium Member

The question over whether machine learning poses an existential risk to humanity will continue to loom over our heads as the technology advances and spreads around the world. Mainly because pundits and some industry leaders won't stop talking about it. Opinions are divided. AI doomers believe there is a significant risk that humans could be wiped out by superintelligent machines. The boomers, however, believe that is nonsense and AI will instead solve the most pressing problems. One of the most confusing aspects of the discourse is that people can hold both beliefs simultaneously: AI will either be very bad or very good.

This week the Center for AI Safety (CAIS) released a paper [PDF] looking at the very bad part. "Rapid advancements in AI have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks," reads the abstract. "Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them."

The report follows a terse warning led by the San Francisco-based non-profit research institution signed by hundreds of academics, analysts, engineers, CEOs, and celebrities. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it said.

CAIS has divided catastrophic risks into four different categories: malicious use, AI race, organizational risks, and rogue AIs. Malicious use describes bad actors using the technology to inflict widespread harms, like searching for highly toxic molecules to develop bioweapons, or spinning up chatbots to spread propaganda or disinformation to prop up or take down political regimes.

The AI race focuses on the dangerous impacts of competition. Nations and companies rushing to develop AI to tackle foreign enemies or rivals for national security or profit reasons could recklessly speed up the technology's capabilities. Some hypothetical scenarios include the military building autonomous weapons or turning to machines for cyberwarfare. Industries might rush to automate and replace human labor with AI to boost productivity, leading to mass unemployment and services run by machines.

The third category, organizational risks, describes deadly disasters such as Chernobyl, when a Soviet nuclear reactor exploded in a meltdown, leaking radioactive chemicals; or the Challenger Space Shuttle that shattered shortly after take off with seven astronauts onboard.

Finally, there are rogue AIs: the common trope of superintelligent machines that have become too powerful for humans to control. Here, AI agents designed to fulfill some goal go awry. As they look for more efficient ways to reach their goal, they can exhibit unwanted behaviors or find ways to gain more power and go on to develop malicious behaviors, like deception.

"These dangers warrant serious concern. Currently, very few people are working on AI risk reduction. We do not yet know how to control highly advanced AI systems, and existing control methods are already proving inadequate. The inner workings of AIs are not well understood, even by those who create them, and current AIs are by no means highly reliable," the paper concluded.





A sleepy town in southern Spain is in shock after it emerged that AI-generated naked images of young local girls had been circulating on social media without their knowledge. The pictures were created using photos of the targeted girls fully clothed, many of them taken from their own social media accounts. These were then processed by an application that generates an imagined image of the person without clothes on. So far more than 20 girls, aged between 11 and 17, have come forward as victims of the app's use in or near Almendralejo, in the south-western province of Badajoz. "One day my daughter came out of school and she said 'Mum there are photos circulating of me topless'," says María Blanco Rayo, the mother of a 14-year-old. "I asked her if she had taken any photos of herself nude, and she said, 'No, Mum, these are fake photos of girls that are being created a lot right now and there are other girls in my class that this has happened to as well.'" She says the parents of 28 girls affected have formed a support group in the town.


"The important thing is about how an AI can digest its experience into its view of the world and how it predicts things going forward and how it needs to have motivations both internally and externally imposed."



Previous work, including by Mills, has shown that cats produce a variety of facial expressions when interacting with humans, and this week researchers revealed felines have a range of 276 facial expressions when interacting with other cats. “However, the facial expressions they produce towards humans look different from those produced towards cats,” said Dr Brittany Florkiewicz, an assistant professor of psychology at Lyon College in Arkansas who co-authored the new work.

Psychiatrist jailed over AI-made child sex abuse imagery

A child psychiatrist was jailed Wednesday for the production, possession, and transportation of child sexual abuse material (CSAM), including the use of web-based artificial intelligence software to create pornographic images of minors. The prosecutors in North Carolina said David Tatum, 41, found guilty by a jury in May, has been sentenced to 40 years in prison and 30 years of supervised release, and ordered to pay $99,000 in restitution. "As a child psychiatrist, Tatum knew the damaging, long-lasting impact sexual exploitation has on the wellbeing of victimized children," said US Attorney Dena J. King in a statement. "Regardless, he engaged in the depraved practice of using secret recordings of his victims to create illicit images and videos of them." "Tatum also misused artificial intelligence in the worst possible way: to victimize children," said King, adding that her office is committed to prosecuting those who exploit technology to harm children. His indictment [PDF] provides no detail about the AI software used; another court document [PDF] indicates that Tatum, in addition to possessing, producing, and transporting sexually explicit material of minors, viewed generated images of kids on a deep fake website. The trial evidence cited by the government includes a secretly-made recording of a minor (a cousin) undressing and showering, and other videos of children participating in sex acts. "Additionally, trial evidence also established that Tatum used AI to digitally alter clothed images of minors making them sexually explicit," prosecutors said. "Specifically, trial evidence showed that Tatum used a web-based artificial intelligence application to alter images of clothed minors into child pornography."
 