Safe Superintelligence Raises $1 Billion in Funding, Led by Ex-OpenAI Chief Scientist Ilya Sutskever
The post Safe Superintelligence Raises $1 Billion in Funding, Led by Ex-OpenAI Chief Scientist Ilya Sutskever appeared on BitcoinEthereumNews.com.
In a significant development within the AI sector, Ilya Sutskever has announced the creation of Safe Superintelligence (SSI), which has raised $1 billion in its initial funding round. The round values SSI at $5 billion and drew top-tier investors such as a16z and Sequoia, reflecting confidence in Sutskever's vision for AI safety and development. Sutskever's declaration on social media, "Mountain: identified," underscores his commitment to tackling the complex challenges posed by advanced AI systems. This article covers the launch of Safe Superintelligence and examines the implications of its substantial funding for the AI landscape.

Safe Superintelligence Secures $1 Billion Funding

On Wednesday, Ilya Sutskever, who previously served as chief scientist at OpenAI, made headlines by announcing that his new company, Safe Superintelligence (SSI), has secured $1 billion from a group of distinguished investors, including NFDG, a16z, Sequoia, DST Global, and SV Angel. The round marks an ambitious start for the company, establishing a valuation of $5 billion at an unusually early stage. SSI signals Sutskever's new focus on building AI systems designed with enhanced safety measures.

Background: The Transition from OpenAI

Before this undertaking, Sutskever went through a tumultuous period at OpenAI that culminated in his resignation, alongside his colleague Jan Leike. Their departures followed a series of controversies surrounding the leadership of co-founder Sam Altman and highlighted what they saw as a critical need to prioritize AI safety. Leike stated on Twitter that the urgency of managing advanced AI technologies is more pressing than ever, underscoring the motivation behind the founding of SSI.
Strategic Vision of Safe Superintelligence

Safe Superintelligence is not only a name but also a mission that encapsulates the company's core objective: to ensure that future AI systems are developed within a robust safety framework. Sutskever leads…
Filed under: News - @ September 4, 2024 11:14 pm