The OpenAI team tasked with protecting humanity is no more

In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.
OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of posts from Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.
In a statement posted on X on Friday, Leike said that the Superalignment team had been fighting for resources to get research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.
Leike’s departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team but also a co-founder of the company. His departure came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman hadn’t been “consistently candid” with the board. Altman’s all-too-brief ouster sparked an internal revolt, with nearly 800 employees signing a letter threatening to quit if Altman wasn’t reinstated. Five days later, Altman was back as OpenAI’s CEO, and Sutskever had signed the letter and said that he regretted his actions.
When it announced the creation of the Superalignment team, OpenAI said it would dedicate 20 percent of its computing power over the next four years to solving the problem of controlling the powerful AI systems of the future. “[Getting] this right is critical to achieve our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team was “struggling for compute and it was getting harder and harder” to get crucial research on AI safety done. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.
Over the last few months, there have been more departures from the Superalignment team. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.
OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4, one of OpenAI’s flagship large language models, will replace Sutskever as chief scientist.
Superalignment wasn’t the only team at OpenAI focused on AI safety. In October, the company created a new “Preparedness” team to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.
Update, May 17 2024, 3:28 PM ET: In response to a request for comment on Leike’s allegations, an OpenAI PR person directed Engadget to Sam Altman’s tweet saying that he’d say something in the next couple of days.