Tech

Former OpenAI Chief Scientist Announces New Company



Ilya Sutskever, co-founder and former chief scientist at OpenAI, announced on Wednesday that he is launching a new venture called Safe Superintelligence Inc. Sutskever said on X that the new lab will focus exclusively on building a safe “superintelligence” — an industry term for a hypothetical system that is smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. The new U.S.-based company will have offices in Palo Alto, California, and Tel Aviv, according to a description shared by Sutskever.

Sutskever was one of the founding members of OpenAI and served as chief scientist during the company’s meteoric rise following the launch of ChatGPT. In November, Sutskever participated in the infamous attempt to oust OpenAI CEO Sam Altman, only to later change his mind and support Altman’s return. When Sutskever announced his resignation in May, he said he was “confident that OpenAI will build an AGI that is safe and beneficial” under Altman’s leadership.

Safe Superintelligence Inc. says it will aim to launch just one product: the system in its name. This business model will insulate the company from commercial pressures, its founders wrote. However, it is currently unclear who will finance the new venture’s development or what exactly its business model will be.

“Our singular focus means there is no distraction from management overhead or product cycles,” the announcement says, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, accused the company of prioritizing “brilliant products” over safety. Leike’s accusations emerged around the time six other safety-minded employees left the company. Altman and OpenAI President Greg Brockman responded to Leike’s accusations, acknowledging that there was more work to be done and saying, “we take our role here very seriously and carefully evaluate feedback on our actions.”

See also: A timeline of all recent accusations made against OpenAI and Sam Altman

In an interview with Bloomberg, Sutskever elaborated on Safe Superintelligence Inc.’s approach, saying, “By safe, we mean safe as in nuclear safety as opposed to safe as in ‘trust and security,’” an apparent reference to one of OpenAI’s core security principles, “pioneering trust and security.”

While many details about the new company have yet to be revealed, its founders have a message for those in the industry who are intrigued: they are hiring.



This story originally appeared on Time.com.

