The other principles are: setting constraints on the development of conscious AI systems; taking a phased approach to their development; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves, it could lead to the creation of “large numbers of new beings deserving moral consideration”.
The paper, published in the Journal of Artificial Intelligence Research, also warned that a mistaken belief that AI systems are already conscious could lead to a waste of political energy as misguided efforts are made to promote their welfare.
In 2023, Sir Demis Hassabis, the head of Google’s AI programme and a Nobel prize winner, said AI systems were “definitely” not currently sentient but could be in the future.