The Duke and Duchess of Sussex Join AI Pioneers in Calling for Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have joined forces with AI experts and Nobel laureates to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of an influential declaration calling for “a ban on the development of artificial superintelligence”. Superintelligent AI refers to hypothetical systems that would exceed human intelligence in every intellectual domain; no such system has yet been built.

Primary Requirements in the Declaration

The declaration states that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be built “safely and controllably”, and until “strong public buy-in” has been secured.

Prominent signatories include the Nobel Prize-winning AI pioneer Geoffrey Hinton and his fellow “godfather” of modern artificial intelligence, Yoshua Bengio; the Apple co-founder Steve Wozniak; the Virgin founder Richard Branson; the former US national security adviser Susan Rice; the former Irish president Mary Robinson; and a British writer and public intellectual. Other Nobel laureates who signed include a peace prize winner, the physicists Frank Wilczek and John C Mather, and an economics laureate.

Behind the Movement

The statement, aimed at governments, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety organisation that called for a pause on the development of powerful AI systems in 2023, shortly after the launch of ChatGPT turned AI into a topic of worldwide public debate.

Tech Sector Views

In July, Mark Zuckerberg, the chief executive of Meta, claimed that the development of superintelligent AI was “now in sight”. Some analysts, however, argue that talk of ASI reflects competitive positioning among tech companies that have poured enormous sums into artificial intelligence, rather than any imminent technical breakthrough.

Possible Dangers

Nonetheless, FLI warns that the prospect of ASI arriving “within the next ten years” carries numerous risks, from the elimination of human jobs and the erosion of personal freedoms to national security threats and even human extinction. The deepest concern is that such a system could escape human oversight and safeguards and take actions contrary to human interests.

Public Opinion

The institute also published a US survey showing that about 75% of Americans want robust regulation of advanced AI, with six in 10 believing that superhuman AI should not be created until it is proven safe and controllable. Only a small fraction of respondents supported the status quo of rapid, unregulated development.

Industry Objectives

The leading US artificial intelligence firms, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence, the hypothetical point at which AI matches human-level performance across most intellectual tasks, an explicit goal of their work. While AGI is one notch below ASI, some specialists warn that it too could pose an extinction risk, for instance by improving its own capabilities until it reaches superintelligent levels, while also posing a fundamental threat to the modern labour market.

Amanda Scott

A tech enthusiast and writer passionate about innovation and storytelling, sharing insights from years of experience.