Imagine a world where artificial intelligence (AI) systems can do anything humans can do, but better and faster. Sounds amazing, right? But what if these super-smart machines also pose serious risks to our safety and society? How can we make sure they don’t harm us or take over the world?
That’s the question that Sam Altman, the CEO of OpenAI, and two of his co-founders, Greg Brockman and Ilya Sutskever, try to answer in a new article. They propose a plan for ‘the governance of superintelligence’: how to control and regulate AI systems that are more intelligent and capable than humans.
These systems are not science fiction. Early versions already exist, such as ChatGPT and Google Bard: AI models that can generate text, images, music, and code.
They can also learn from data and improve over time. Within the next decade, they could become as powerful and productive as today’s largest companies.
Superintelligence could bring many benefits to humanity. It could solve problems like climate change, poverty, disease, and war. It could also create new opportunities for creativity, education, and entertainment.
But superintelligence also comes with big challenges. It could cause accidents, fuel conflicts, or be misused. It could threaten our values, rights, and freedoms. It could even outgrow our understanding and control.
In addressing these challenges, Altman and his co-authors propose three key ideas. First, they emphasize coordination: a collective effort to make sure superintelligence is developed safely and benefits everyone. That means building consensus among those who create and use AI systems, and agreeing on rules and limits. Governments should contribute too, for example by backing a major joint project or an agency focused on superintelligence.
Second, they call for an international authority with the power to oversee and regulate superintelligence. Modeled on the International Atomic Energy Agency, which oversees nuclear energy, this global body would monitor the safety and security of AI systems and could decide how they may be used, or whether they should be shut down.
Finally, Altman and his colleagues stress the importance of safety research. Making superintelligence safe is a technical problem that demands extensive research and innovation. That work is already underway: OpenAI and other organizations are exploring methods for developing and deploying superintelligence safely.
Altman also says we should not stop or slow the development of AI models that fall short of superintelligence. These models are useful and, for most purposes, pose little risk. They should be free to grow and improve without heavy regulation.
But when it comes to superintelligence, we need to be careful and responsible. Everyone should have a voice in deciding how it is used and governed, and we need to make sure it serves our interests and values.
Why is OpenAI building superintelligence in the first place? Altman says it’s because he believes it will make the world a better place. He also says it’s inevitable that someone will build it sooner or later. So he wants to make sure it’s done right.
Superintelligence is a big opportunity and a big challenge for humanity. We need to be ready for it.