Another safety researcher is leaving OpenAI

  • Miles Brundage, who advises OpenAI leadership on safety and policy, announced his departure.
  • He said that he’s leaving the company to have more independence and freedom to publish.
  • The AGI Readiness team that he oversaw will be disbanded.

Miles Brundage, a senior policy advisor and head of the AGI Readiness team at OpenAI, is leaving the company. He announced the decision today in a post on X, accompanied by a Substack article explaining his reasoning. The AGI Readiness team that he oversaw will be disbanded, with its members distributed among other parts of the company.

Brundage is just the latest high-profile safety researcher to leave OpenAI. In May, the company dissolved its Superalignment team, which focused on the risks of artificial superintelligence, after the departure of its two leaders, Jan Leike and Ilya Sutskever. In recent months, the company has also seen the departures of Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and VP of Research Barret Zoph.

OpenAI did not respond to a request for comment.

For the past six years, Brundage has advised OpenAI executives and board members on how best to prepare for the rise of artificial intelligence that rivals human intelligence, a development many experts agree could fundamentally transform society.

He has been responsible for some of OpenAI’s biggest innovations in safety research, including instituting external red teaming, in which outside experts probe the company’s products for potential problems.

Brundage said that he’s leaving the company to have more independence and freedom to publish. He cited disagreements with OpenAI over limits on what research he was allowed to publish and said that “the constraints have become too much.”

He also said that working within OpenAI has biased his research and made it difficult to be impartial about the future of AI policy. In his post on X, Brundage referenced a prevailing sentiment within OpenAI that “speaking up has big costs and that only some people are able to do so.”

Read the original article on Business Insider: https://www.businessinsider.com/another-safety-researcher-is-leaving-openai-2024-10