
Safe Superintelligence (SSI), the artificial intelligence startup co-founded by former OpenAI Chief Scientist Ilya Sutskever, has raised over $1 billion in capital. The investment comes from a host of prominent backers, including NFDG, an investment partnership run by Nat Friedman and SSI CEO Daniel Gross, as well as a16z, Sequoia, DST Global, and SV Angel.

A New Chapter for AI Safety

The funding announcement marks a significant milestone for SSI, which has remained somewhat shrouded in mystery since its inception. According to a statement released by the company, the proceeds from the latest funding round will be used to acquire computing power and hire a team of researchers and engineers. The hiring will be split between offices in Palo Alto and Tel Aviv, reflecting the global nature of the company’s ambitions.

While SSI has not yet disclosed the specific areas of research it will pursue, the company’s focus on AI safety is well-established. Sutskever, who was previously at the helm of the now-dismantled Superalignment team at OpenAI, has long been a proponent of ensuring that AI systems are safe and beneficial to humanity.

A Promising Valuation

Reuters, citing a source familiar with the matter, reported that the new funding round values SSI at an impressive $5 billion. This valuation underscores the confidence investors have in the company’s vision and the potential impact of its work in the AI safety space.

A Turbulent Past

Sutskever’s journey to SSI has been marked by a series of high-profile events. Prior to his departure from OpenAI, he led the Superalignment team, which focused on general AI safety research. However, his time at OpenAI came to an abrupt end following a highly publicized fallout involving several former OpenAI board members and CEO Sam Altman.

The conflict, which centered on what Sutskever described as a breakdown in communications, led to his departure from the company. The incident was part of broader turmoil that saw the firing of Sam Altman and a subsequent period of instability within OpenAI.

The Road Ahead

With the new funding in place, SSI is now poised to continue its work in AI safety. The company’s focus on ensuring that superintelligent AI systems are aligned with human values and goals is critical, given the rapid pace of AI development and the potential risks associated with uncontrolled AI growth.

The hiring of researchers and engineers in both Palo Alto and Tel Aviv suggests a commitment to leveraging diverse talent and perspectives in the pursuit of AI safety. This global approach is likely to enhance the company’s ability to tackle complex challenges in the field.

Conclusion

The $1 billion investment in Safe Superintelligence is a testament to the growing importance of AI safety research. As the AI landscape continues to evolve, companies like SSI are playing a crucial role in ensuring that the benefits of AI are realized while minimizing potential risks. With a strong team and substantial financial backing, SSI is well-positioned to make significant contributions to the field of AI safety in the years to come.

