OpenAI has announced its “Preparedness Framework,” which calls for a specialised team to evaluate the “catastrophic risks” posed by advanced artificial intelligence systems.
In a blog post on December 18, OpenAI said its recently formed “Preparedness Team” will act as a liaison between the organization’s safety and policy departments, serving as a checks-and-balances system against the risks posed by increasingly sophisticated AI models. OpenAI said it will release its technology only if the safety team certifies it as secure.
Under the new structure, the Preparedness Team will examine safety reports before forwarding them to OpenAI’s corporate management and board. While executives remain in charge of final decisions, the board now has the power to reverse safety choices if necessary.
The announcement follows recent organisational upheaval at OpenAI, including the dismissal of Sam Altman as CEO and his subsequent reinstatement. After Altman’s return, the company unveiled its new board, with Bret Taylor serving as chair alongside members Adam D’Angelo and Larry Summers.
In November 2022, OpenAI released ChatGPT to the general public, igniting interest in AI’s potential benefits as well as worries about its risks.
In response to these worries, leading AI developers, including OpenAI, Microsoft, Google, and Anthropic, launched the Frontier Model Forum in July to oversee the industry’s self-regulation of responsible AI development. In October, US President Joe Biden signed an executive order establishing new safety standards for the development and deployment of high-level AI models. The order requires leading AI developers, including OpenAI, to ensure their models are built transparently and safely.