Sutskever strikes AI gold with billion-dollar backing for superintelligent AI

On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in funding. The 3-month-old company plans to focus on developing what it calls "safe" AI systems that surpass human capabilities.

The fundraising effort shows that even amid growing skepticism around massive investments in AI tech that so far have failed to be profitable, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms like Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round.

SSI aims to use the new funds for computing power and attracting talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto, California, and Tel Aviv, Reuters reported.

While SSI did not officially disclose its valuation, sources told Reuters it was valued at $5 billion, a stunningly large figure just three months after the company's founding and with no publicly known products yet developed.

Son of OpenAI

OpenAI Chief Scientist Ilya Sutskever speaks at TED AI 2023. Credit: Benj Edwards

Much like Anthropic before it, SSI formed as a breakaway company founded in part by former OpenAI employees. Sutskever, 37, cofounded SSI with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.

Sutskever's departure from OpenAI followed a rough period at the company: he was reportedly disenchanted that OpenAI management did not devote adequate resources to his "superalignment" research team, and he was involved in the brief ouster of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would "pursue safe superintelligence in a straight shot, with one focus, one goal, and one product."

Superintelligence, as we've noted previously, is a nebulous term for a hypothetical technology that would far surpass human intelligence. There is no guarantee that Sutskever will succeed in his mission (and skeptics abound), but the star power he gained from his academic bona fides and being a key cofounder of OpenAI has made rapid fundraising for his new company relatively easy.

The company plans to spend a couple of years on research and development before bringing a product to market, and its self-proclaimed focus on "AI safety" stems from the belief that powerful AI systems that can cause existential risks to humanity are on the horizon.

The "AI safety" topic has sparked debate within the tech industry, with companies and AI experts taking different stances on proposed safety regulations, including California's controversial SB-1047, which may soon become law. Since the topic of existential risk from AI is still hypothetical and frequently guided by personal opinion rather than science, that particular controversy is unlikely to die down anytime soon.
