OpenAI announced on Tuesday the formation of a new committee that will provide recommendations to the company's board regarding safety and security. This decision comes shortly after the disbandment of a team dedicated to AI safety.
In a blog post, OpenAI revealed that the new committee will be overseen by CEO Sam Altman, along with Bret Taylor, the company's board chair, and board member Nicole Seligman.
The announcement follows another significant exit: Ilya Sutskever, who co-led OpenAI's "superalignment" team focused on keeping advanced AI systems aligned with human needs, has left the company, shortly after the departure of executive Jan Leike, who criticized OpenAI for not prioritizing AI safety work. Sutskever was involved in Altman's unexpected removal last year but later supported his return.
Earlier this month, OpenAI told CNN that it was disbanding the superalignment team and redistributing its members to other roles across the company to better advance its superalignment goals.
In Tuesday's blog post, OpenAI also said it has begun training a new AI model to succeed GPT-4, the model that currently powers ChatGPT, describing the new model as a step closer to artificial general intelligence.
"We take pride in creating and introducing models that are top-notch in terms of capabilities and safety," the company stated.
The Safety and Security Committee's first task will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, according to the blog post. At the end of that period, the committee will present its recommendations to the full board for review, after which OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.
Editor's P/S:
The recent developments at OpenAI underscore the ongoing tension between AI advancement and the imperative for safety and security. The disbandment of the AI safety team and the departure of key executives highlight concerns over the company's priorities. The formation of a Safety and Security Committee is a positive step, but it remains to be seen whether it can effectively address the concerns raised.
OpenAI's work on a new AI model intended to surpass GPT-4 invites both excitement and apprehension. While progress toward artificial general intelligence is tantalizing, it also raises questions about potential risks and the need for robust safety measures. The company's stated commitment to safety and security is crucial, and it will be important for OpenAI to communicate its progress and findings transparently to the public.