OpenAI co-founder Ilya Sutskever launches new firm focused on developing 'a safe super AI'


Ilya Sutskever, who was OpenAI's chief scientist, teamed up with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy to establish SSI, or Safe Superintelligence Inc.


Sutskever left OpenAI in May following a significant disagreement with the company's leadership over how to handle AI safety. Image Credit: AFP, Reuters

Ilya Sutskever, one of the co-founders of OpenAI, has started a new company called Safe Superintelligence Inc. (SSI), just one month after officially leaving OpenAI.

Sutskever, who was OpenAI’s chief scientist, teamed up with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy to establish SSI.

At OpenAI, Sutskever played a crucial role in the company's efforts to enhance AI safety, particularly in anticipation of "superintelligent" AI systems. He worked closely with Jan Leike, who co-led OpenAI's Superalignment team.

However, both Sutskever and Leike left OpenAI in May following a significant disagreement with the company's leadership over how to handle AI safety. Leike has since joined Anthropic, another AI-focused company, where he now leads a team of his own.

Sutskever has long been an advocate for addressing the challenging aspects of AI safety. In a 2023 blog post co-written with Leike, he predicted that AI with intelligence surpassing humans could emerge within the next decade. He emphasized that such AI might not necessarily be friendly and stressed the need for research into methods to control and restrict it.

His commitment to AI safety remains strong. On Wednesday, Sutskever announced the formation of his new company, SSI, via a tweet, describing safe superintelligence as a mission and the basis on which the new organisation will operate. He added that the SSI team, its investors, and its business model are all aligned to achieve safe superintelligence.

Sutskever also said the company approaches safety and capabilities in tandem, treating them as technical problems to be solved through revolutionary engineering and scientific breakthroughs.

He added, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.” This singular focus, he said, means no distraction from management overhead or product cycles, and a business model that prioritises safety, security, and progress over short-term commercial pressures.

In an interview with Bloomberg, Sutskever discussed the new company in more detail but did not disclose its funding status or valuation.

Unlike OpenAI, which initially launched as a non-profit in 2015 and later restructured to accommodate the immense funding required for its computing power, SSI is being designed as a for-profit entity from the start. Given the current interest in AI and the team’s impressive credentials, SSI may soon attract significant capital. Daniel Gross told Bloomberg, “Out of all the problems we face, raising capital is not going to be one of them.”

SSI has established offices in Palo Alto and Tel Aviv and is actively recruiting technical talent.

The company aims to push the boundaries of AI capabilities while ensuring that safety measures stay a step ahead, betting on engineering and scientific breakthroughs to deliver both.

The formation of SSI highlights the ongoing debate and concern within the AI community about the potential risks associated with superintelligent AI.

Sutskever’s new venture aims to address these risks head-on, ensuring that the development of AI technologies is both safe and beneficial for society.
