Experts are split between concerns about future threats and present dangers. Both camps issued dire warnings.

I was a technophile in my early teenage days, sometimes wishing that I had been born in 2090, rather than 1990, so that I could see all the incredible technology of the future. Lately, though, I’ve become far more sceptical about whether the technology we interact with most is really serving us – or whether we are serving it.
So when I got an invitation to attend a conference on developing safe and ethical AI in the lead-up to the Paris AI summit, I was fully prepared to hear Maria Ressa, the Filipino journalist and 2021 Nobel peace prize laureate, talk about how big tech has, with impunity, allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had a very real, negative impact on elections.
But I wasn’t prepared to hear some of the “godfathers of AI”, such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things might go much further off the rails. At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark believes the “A” should refer to “autonomous”), which would mean that, for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence – one that might develop objectives “misaligned” with human wellbeing. Perhaps it will come about as the result of a nation state’s security strategy, or the search for corporate profits at all costs, or perhaps all on its own.
“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s like if you were interviewing me in 1942, and you asked me: ‘Why aren’t people worried about a nuclear arms race?’ Except they think they are in an arms race, but it’s actually a suicide race.” It brought to mind Ronald D Moore’s 2003 reimagining of Battlestar Galactica, in which a public relations official shows journalists “things that look odd, or even antiquated, to modern eyes, like phones with cords, awkward manual valves, computers that barely deserve the name”, and explains: “It was all designed to operate against an enemy that could infiltrate and disrupt all but the most basic computer systems … we were so frightened by our enemies that we literally looked backwards for protection.”
Perhaps we need a new acronym, I thought. Instead of mutually assured destruction, we should be talking about “self-assured destruction” – with extra emphasis: SAD! An acronym that might even break through to Donald Trump. The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched, given the exponential pace of AI development? As Bengio pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.
When breakthroughs in human cloning were within scientists’ reach, biologists came together and agreed not to pursue it, says Stuart Russell, who literally wrote the textbook on AI. Similarly, both Tegmark and Russell favour a moratorium on the pursuit of AGI, and a tiered risk approach – stricter than the EU’s AI Act – where, just as with the drug approval process, AI systems in the higher-risk tiers would have to demonstrate to a regulator that they don’t cross certain red lines, such as being able to copy themselves on to other computers.
But even if the conference seemed weighted towards these future-driven fears, there was a fairly evident split among the leading AI safety and ethics experts from industry, academia and government in attendance. If the “godfathers” were worried about AGI, a younger and more diverse contingent was pushing to put an equivalent focus on the dangers that AI already poses to climate and democracy.
We don’t have to wait for an AGI to decide, on its own, to flood the world with datacentres to evolve itself more quickly – Microsoft, Meta, Alphabet, OpenAI and their Chinese counterparts are already doing it. Or for an AGI to decide, on its own, to manipulate voters en masse in order to put politicians with a deregulation agenda into office – which, again, Donald Trump and Elon Musk are already pursuing. And even in AI’s current, early stages, its energy use is catastrophic: according to Kate Crawford, visiting chair of AI and justice at the École Normale Supérieure, datacentres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to keep surging.