AI progress is scaring people

TL;DR The Future of Life Institute calls for a 6-month AI moratorium.

A nuclear explosion mushroom cloud symbolising the explosion of AI capabilities.


“Pause Giant AI Experiments: An Open Letter … We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

— Future of Life Institute

The Future of Life Institute recently published an open letter calling on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4, the widely used large language model. This plea for a moratorium comes as the potential for AI to create unforeseen risks to society and the environment grows. The letter argues for a more thoughtful and deliberate approach to the development of AI systems, so that researchers and industry can avoid unforeseen and potentially catastrophic consequences. The hope is that this request will help ensure that AI is developed responsibly and that its potential harms are minimized.

It is unlikely that prominent AI research organisations will heed the Future of Life Institute's call to immediately pause the training of AI systems more powerful than GPT-4. If they did, they would inevitably fall behind their competitors: AI research is an intensely competitive field, and any time lost in development could mean the difference between success and failure. Moreover, some organisations may simply be unwilling to accept a temporary pause, judging the potential benefits of AI too great to forgo.

The open letter was signed by many prominent individuals in AI and tech, including notable researchers such as Yoshua Bengio and Stuart Russell, tech leaders such as Elon Musk, and visionaries such as Steve Wozniak. These high-profile signatories underscore the urgency of the situation and the importance of taking the time to ensure that AI is developed in a responsible and ethical manner. The hope is that they will help spread awareness of the risks posed by AI and encourage other organisations to take action.

You can read the open letter at

At the time of writing this article, the letter has 5507 signatories.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

— Future of Life Institute

We also support this letter and have signed it; however, we expect it to be a token effort.

Notable signatories of the open letter

Here is a list of some noteworthy people who have signed this open letter. There are too many to list them all, and the list keeps growing, but this should give you an idea of how significant the letter is.

Yoshua Bengio, Founder and Scientific Director at Mila, Turing Award winner and professor at the University of Montreal

Stuart Russell, Professor of Computer Science at UC Berkeley, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

Elon Musk, CEO of SpaceX, Tesla & Twitter

Steve Wozniak, Co-founder of Apple

Emad Mostaque, CEO, Stability AI

Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute

Grady Booch, ACM Fellow, IEEE Fellow, IEEE Computing Pioneer, IBM Fellow

See the full list here …