Who is Ilya Sutskever, the man at the center of OpenAI’s leadership shakeup—and why is he so worried about AI superintelligence going rogue? 


As speculation swirls around the leadership shakeup at OpenAI announced Friday, more attention is turning to the man at the center of it all: Ilya Sutskever. The company’s chief scientist, Sutskever also serves on the OpenAI board that ousted CEO Sam Altman on Friday, claiming somewhat cryptically that Altman had not been “consistently candid” with it.

Last month, Sutskever, who generally shies away from the media spotlight, sat down with MIT Technology Review for a long interview. The Israeli-Canadian told the magazine that his new focus was on how to prevent an artificial superintelligence—which can outmatch humans but as far as we know doesn’t yet exist—from going rogue.

Sutskever was born in Soviet Russia but raised in Jerusalem from the age of five. He then studied at the University of Toronto with Geoffrey Hinton, a pioneer in artificial intelligence sometimes called the “godfather of AI.” 

Earlier this year, Hinton left Google and warned that AI companies were racing toward danger by aggressively creating generative-AI tools like OpenAI’s ChatGPT. “It is hard to see how you can prevent the bad actors from using it for bad things,” he told the New York Times.

Hinton and two of his graduate students—one of them Sutskever—developed a neural network in 2012 that they trained to identify objects in photos. Called AlexNet, the project showed that neural networks were much better at pattern recognition than had been generally realized.

Impressed, Google bought Hinton’s spin-off DNNresearch—and hired Sutskever. While at the tech giant, Sutskever helped show that the same kind of pattern recognition displayed by AlexNet for images could also work for words and sentences.

But Sutskever soon came to the attention of another power player in artificial intelligence: Tesla CEO Elon Musk. The mercurial billionaire had long warned of the potential dangers AI poses to humanity. Musk told the Lex Fridman Podcast this month that years ago he grew alarmed that Google cofounder Larry Page did not take AI safety seriously, and by the concentration of AI talent at Google, especially after it acquired DeepMind in 2014.

At Musk’s urging, Sutskever left Google in 2015 to become a cofounder and chief scientist at OpenAI, then a nonprofit that Musk envisioned as a counterweight to Google in the AI space. (Musk later fell out with OpenAI, which moved away from its nonprofit structure and took billions in investment from Microsoft, and he now has a ChatGPT competitor called Grok.)

“That was one of the toughest recruiting battles I’ve ever had, but that was really the linchpin for OpenAI being successful,” Musk said, adding that Sutskever, in addition to being smart, was a “good human” with a “good heart.” 

At OpenAI, Sutskever played a key role in developing large language models, including GPT-2, GPT-3, and the text-to-image model DALL-E. 

Then came the release of ChatGPT late last year, which gained 100 million users in under two months and set off the current AI boom. Sutskever told Technology Review that the AI chatbot gave people a glimpse of what was possible, even if it later disappointed them by returning incorrect results. (Lawyers embarrassed after trusting ChatGPT too much are among the disappointed.)

But more recently Sutskever’s focus has been on the potential perils of AI, particularly once an AI superintelligence that can outmatch humans arrives, which he believes could happen within 10 years. (He distinguishes superintelligence from artificial general intelligence, or AGI, which can merely match humans.)

Central to the leadership shakeup at OpenAI on Friday was the issue of AI safety, according to anonymous sources who spoke to Bloomberg, with Sutskever disagreeing with Altman on how quickly to commercialize generative AI products and the steps needed to reduce potential public harm.

“It’s obviously important that any superintelligence anyone builds does not go rogue,” Sutskever told Technology Review.

With that in mind, his thoughts have turned to alignment—steering AI systems toward people’s intended goals or ethical principles rather than letting them pursue unintended objectives—particularly as it might apply to AI superintelligence.

In July, Sutskever and colleague Jan Leike wrote an OpenAI announcement about a project on superintelligence alignment, or “superalignment.” They warned that while superintelligence could help “solve many of the world’s most important problems,” it could also “be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”



