OpenAI adds former NSA chief Gen. Paul Nakasone to board of directors, Safety and Security Committee

Gen. Nakasone’s unparalleled experience in cybersecurity will be crucial in guiding OpenAI to achieve its mission of ensuring artificial general intelligence (AGI) benefits all of humanity, says OpenAI board chair Bret Taylor.

OpenAI announced on Thursday that it is adding former NSA head and retired General Paul Nakasone to its board of directors as well as its newly formed Safety and Security Committee.

This strategic move aims to address growing concerns and convince sceptics that the company is committed to ensuring the safety and security of its AI models, especially as it pursues its ambitious goal of developing superintelligence.

Nakasone, who has extensive experience in cybersecurity and national security from his time leading the military’s Cyber Command in addition to his tenure at the NSA, will bring a wealth of expertise to OpenAI’s governance.

“Artificial Intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed,” said Bret Taylor, OpenAI board chair, in a statement.

He emphasized that Nakasone’s unparalleled experience in cybersecurity will be crucial in guiding OpenAI to achieve its mission of ensuring artificial general intelligence (AGI) benefits all of humanity.

Senator Mark Warner (D-Va.), who chairs the Senate Intelligence Committee, praised Nakasone’s appointment, calling it a “huge get” for OpenAI. Warner noted that Nakasone had many options after leaving government service earlier this year, and that his decision to join OpenAI underscores the respect and recognition he commands in the security community.

Warner highlighted Nakasone’s expertise in cybersecurity, election security, and his realistic view of the challenges posed by China in the tech and security sectors.

On the other hand, OpenAI has faced criticism from former high-ranking employees who argue that the company has been prioritizing speed over safety in its AI development.

Jan Leike, who helped lead OpenAI’s long-term safety efforts under the project “superalignment,” left the company last month. In a thread announcing his departure, Leike criticized OpenAI for not adequately supporting the superalignment team’s work and expressed concerns about the company’s rapid pace potentially compromising safety.

Policy researcher Gretchen Krueger also departed from OpenAI last month, echoing some of Leike’s concerns and adding her own.

She voiced worries about the company’s commitment to ethical and secure AI development, suggesting that more robust support for safety initiatives is necessary.

The formation of the Safety and Security Committee and the inclusion of Nakasone are seen as steps towards addressing these criticisms and reinforcing OpenAI’s commitment to secure AI innovation. By integrating seasoned cybersecurity experts into its leadership, OpenAI aims to bolster its strategies for safe AI deployment and mitigate the risks associated with developing powerful AI technologies.