Ex-OpenAI chief scientist raises $1B for startup with only 10 employees
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has reportedly raised $1 billion for his artificial intelligence startup, which is said to be valued at $5 billion.
The company, Safe Superintelligence Inc. (SSI), announced the funding in a Sep. 4 update to its website. Per SSI, it raised $1 billion from NFDG, a16z, Sequoia, DST Global, and SV Angel.
As Cointelegraph recently reported, Sutskever and engineer Dan Levy left OpenAI in June 2024, less than a year after Sutskever formed the firm’s “Superalignment” safety team. While Sutskever said at the time that he was leaving to pursue other opportunities, former employees have hinted at disagreements between CEO Sam Altman and members of the firm and its board.
Related: Former OpenAI employee quit to avoid ‘working for the Titanic of AI’
Artificial intelligence safety
SSI appears to be a direct competitor to Sutskever’s former firm, OpenAI, with a slightly tweaked mission. According to its website, SSI’s focus is to safely build an AI model more intelligent than humans.
That may sound familiar because Anthropic, another company founded by former OpenAI employees, also states that its mission is to build safe AI models. In fact, according to OpenAI’s website, it too is solely focused on the safe development of AI for the benefit of humanity.
Where SSI diverges, so far, is in its claim that it will only ever develop one product: a safe superintelligence.
Superintelligence
OpenAI and Anthropic both provide products and services to enterprise customers and the general public. They each have flagship products, ChatGPT and Claude, respectively, which they offer on a limited basis at no cost to consumers with the option to subscribe for additional access.
However, according to the companies themselves, neither product would fit any definition of “superintelligence.” It bears mention that “superintelligence” is a term typically attributed to philosopher Nick Bostrom. In his 2014 book, “Superintelligence: Paths, Dangers, Strategies,” Bostrom discusses an AI system with cognitive capabilities that far exceed those of even the smartest humans. He predicted, at the time, that such an entity could arrive as soon as “the first third of the next century.”
There is no scientific definition or standard measurement for “superintelligence.” But, based on the context, it can be assumed that SSI is committed to research and development until it can debut an AI model that’s demonstrably more capable than even the smartest humans.
Both OpenAI and Anthropic initially claimed their only purpose was to build a human-level AI in service to humanity. But building the world’s most powerful AI systems costs billions of dollars using current methods and technologies.
Whether SSI, a company with only 10 employees, uses its $1 billion war chest to build the world’s first superintelligence or merely the next competitor to ChatGPT in the chatbot arena remains to be seen.