At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark believes the “A” should refer to “autonomous”), which would mean that, for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence – one that might develop objectives “misaligned” with human wellbeing.
Similarly, both Tegmark and Russell favour a moratorium on the pursuit of AGI, and a tiered risk approach – stricter than the EU’s AI Act – where, just as with the drug approval process, AI systems in the higher-risk tiers would have to demonstrate to a regulator that they don’t cross certain red lines, such as being able to copy themselves on to other computers.
And even in AI’s current, early stages, its energy use is catastrophic: according to Kate Crawford, visiting chair of AI and justice at the École Normale Supérieure, data centres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to keep surging.
But I wasn’t prepared to hear some of the “godfathers of AI”, such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things might go much further off the rails.
The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development?