Stop Artificial Intelligence! 1000 experts call for a pause in AI training

'Pause Giant AI Experiments: An Open Letter'

'Pause Giant AI Experiments: An Open Letter' is the title of the letter that more than 1,000 technology and business leaders have signed, among them Elon Musk.

The Future of Life Institute

In the letter, issued by the Future of Life Institute, the signatories ask AI laboratories to pause the development of today's most advanced artificial intelligence systems.

 

Risk to humanity?

The letter warns, "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

Are things getting out of control?

The authors of the letter note that "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."

"A misaligned superintelligent AGI could cause grievous harm"

The letter also quotes a blog post in which OpenAI warns the world of the risks  if an artificial general intelligence (AGI) were developed recklessly: "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too."

Illustrious signatories

Among the signatories are illustrious names such as Elon Musk, Steve Wozniak (co-founder of Apple, in the photo), and Craig Peters (CEO of Getty Images).

Security protocols

Among the proposals made in the open letter is the introduction of shared safety protocols for AI systems; to implement these protocols, the authors argue, development of AI more powerful than GPT-4 must be paused.

Government intervention?

"If such a pause cannot be quickly enacted, governments should step in and institute a moratorium," they requested.

Photo: Unsplash - Andy Kelly

Powerful tool

The authors request that the focus of AI development change, writing, "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

Photo: Unsplash - Boitumelo Phetla

 

Too many bugs (for now)

The problem, they say, is that these tools remain riddled with errors while also having a tremendous capacity to spread misinformation.

Photo: Unsplash - Drew Dizzy Graham

Hallucinations

Among the risks the open letter cites are so-called 'hallucinations': information and data offered by an AI that seem completely reliable but are actually erroneous.

Photo: Unsplash - Maximalfocus

Limiting risks

In essence, the signatories request a sustainable and measured development of AI, building powerful systems "only once we are confident that their effects will be positive and their risks will be manageable."

Photo: Unsplash - Jonathan Kemper

'AI Summer'

Nonetheless, the authors point out: "Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards."

Photo: Unsplash - Lukas

Slow things down and do it right

The authors hope that by slowing down we can avoid the risks of an "AI fall," stating: "Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall."

Photo: Unsplash - Markus Spiske

Not everyone agrees

Of course, there are those who have declined to sign the letter, such as Sam Altman (pictured), CEO of OpenAI.

Complicated application

Such refusals suggest that the technology community may not accept this moratorium, leaving only the complicated option of governments legislating artificial intelligence.

Photo: Unsplash - Nguyen Dang Hoang Nhu

And now what?

While we wait to see the impact of this letter, it is possible that the number of signatories will grow in the coming weeks, as may the number of those who oppose it. We will just have to wait and see what happens; the outcome could have a profound effect on society.

Photo: Unsplash - Possessed Photography

 
