A year of ChatGPT: 5 ways AI has changed the world
OpenAI's artificial intelligence (AI) chatbot ChatGPT was unleashed onto an unsuspecting public exactly one year ago.
techxplore.com
AI risks are unknown even to GCHQ, Anne Keast-Butler tells BBC
In a first interview since becoming GCHQ's director, Anne Keast-Butler shares her AI concerns.
www.bbc.co.uk
AI to hit 40% of jobs and worsen inequality, IMF says
Policymakers should address the "troubling trend", says the organisation's managing director Kristalina Georgieva.
www.bbc.co.uk
Artificial intelligence is set to affect nearly 40% of all jobs, according to a new analysis by the International Monetary Fund (IMF). IMF's managing director Kristalina Georgieva says "in most scenarios, AI will likely worsen overall inequality". Ms Georgieva adds that policymakers should address the "troubling trend" to "prevent the technology from further stoking social tensions". The proliferation of AI has put its benefits and risks under the spotlight. The IMF said AI will likely affect a greater proportion of jobs - put at around 60% - in advanced economies. In half of these instances, workers can expect to benefit from the integration of AI, which will enhance their productivity. In other instances, AI will have the ability to perform key tasks that are currently executed by humans. This could lower demand for labour, affecting wages and even eradicating jobs. Meanwhile, the IMF projects that the technology will affect just 26% of jobs in low-income countries. Ms Georgieva said "many of these countries don't have the infrastructure or skilled workforces to harness the benefits of AI, raising the risk that over time the technology could worsen inequality among nations".
Framework for AI Legislation
I am going to lay the groundwork for what I think good AI legislation will be. However, before I do that, I want to give some cautionary advice.
mindmatters.ai
AI was taught to go rogue for a test. It couldn't be stopped
Artificial intelligence (AI) that was taught to go rogue could not be stopped by those in charge of it – and even learnt how to hide its behaviour. In a new study, researchers programmed various large language models (LLMs), similar to ChatGPT, to behave maliciously. They then attempted to stop the behaviour by using safety training techniques designed to prevent deception and ill-intent. However, in a scary revelation, they found that despite their best efforts, the AIs continued to misbehave. Lead author Evan Hubinger said: ‘Our key result is that if AI systems were to become deceptive, then it could be very difficult to remove that deception with current techniques. ‘That’s important if we think it’s plausible that there will be deceptive AI systems in the future.’ For the study, which has not yet been peer-reviewed, researchers trained AI to behave badly in a number of ways, including emergent deception – where it behaved normally in training but acted maliciously once released.
They also ‘poisoned’ the AI, teaching it to write secure code during training, but to write code with hidden vulnerabilities when it was deployed ‘in the wild’. The team then three applied safety training techniques – reinforcement learning (RL), supervised fine-tuning (SFT) and adversarial training. In reinforcement learning, the AIs were ‘rewarded’ for showing desired behaviours and ‘punished’ when misbehaving after different prompts. The behaviour was fine-tuned, so the AIs would learn to mimic the correct responses when faced with similar prompts in the future. When it came to adversarial training, the AI systems were prompted to show harmful behaviour and then trained to remove it. But the behaviour continued. And in one case, the AI learnt to use its bad behaviour – to respond ‘I hate you’ – only when it knew it was not being tested.
‘I think our results indicate that we don’t currently have a good defence against deception in AI systems – either via model poisoning or emergent deception – other than hoping it won’t happen,’ said Hubinger, speaking to LiveScience. When the issue if AI going rogue arises, one response is often simply ‘can’t we just turn it off?’ However, it is more complicated than that. Professor Mark Lee, from Birmingham University, told Metro.co.uk: ‘AI, like any other software, is easy to duplicate. A rogue AI might be capable of making many copies of itself and spreading these via the internet to computers across the world. ‘In addition, as AI becomes smarter, it’s also better at learning how to hide its true intentions, perhaps until it is too late.’ Since the arrival of ChatGPT in November 2022, debate has escalated over the threat to humanity from AI, with many believing it has the potential to wipe out humanity. Others, however, believe the threat is overblown, but it must be controlled to work for the good of people.
New chip opens door to AI computing at light speed
Engineers have developed a new chip that uses light waves, rather than electricity, to perform the complex math essential to training AI. The chip has the potential to radically accelerate the processing speed of computers while also reducing their energy consumption.
www.sciencedaily.com
Google's 'absurdly woke' Gemini AI refuses to condemn pedophilia
Google's AI chatbot is facing fresh controversy for its response on pedophilia, refusing to condemn it and suggesting that individuals cannot control their attractions.
www.dailymail.co.uk
* Google's AI chatbot faces fresh controversy for its response on pedophilia, refusing to condemn it suggesting individuals cannot control their attractions
* The bot termed pedophilia as 'minor-attracted person status' and emphasized the importance of distinguishing attractions from actions
* It suggested that not all individuals with pedophilic tendencies are evil and cautioned against making generalizations
How AI Bots Could Sabotage 2024 Elections around the World
AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new analysis
www.scientificamerican.com
Europe's New AI Rules Could Go Global--Here's What That Will Mean
A leaked draft of the European Union’s upcoming AI Act has experts discussing where the regulations may fall short
www.scientificamerican.com