OpenAI, the artificial intelligence firm behind ChatGPT, is searching for a new executive to lead its efforts in studying and preparing for potential safety risks tied to rapidly advancing AI technology.
In a post on X, OpenAI CEO Sam Altman acknowledged that increasingly sophisticated AI models pose a range of challenges, including concerns about mental health impacts and the possibility that AI could uncover critical vulnerabilities in cybersecurity. Altman said the company needs a dedicated leader to head up its preparedness efforts and help the world navigate the emerging risks associated with frontier AI capabilities.
The newly listed position, Head of Preparedness, carries a significant compensation package, with a base salary of $555,000 plus equity. According to the job description, the role’s primary responsibility will be executing OpenAI’s Preparedness Framework — the company’s strategy for tracking and preparing for frontier capabilities that could cause severe harm if misused.
Altman emphasized that the ideal candidate should be committed to helping safeguard against misuse of advanced AI technologies while also enabling defenders, such as cybersecurity professionals, to benefit from cutting‑edge capabilities in a way that attackers cannot exploit.
OpenAI’s focus on preparedness is not new. The company first established a dedicated preparedness team in 2023 to study and mitigate potential “catastrophic risks,” ranging from everyday threats like phishing attacks to more speculative dangers such as nuclear security concerns. The Preparedness Framework has been a central part of that effort.
However, recent leadership changes within OpenAI’s safety and preparedness ranks have created a vacuum. Aleksander Madry, the previous Head of Preparedness, was reassigned to a role focused on AI reasoning, and other safety executives have either departed the company or moved into roles outside the preparedness and safety domain. That has made the search for new leadership a priority.
In addition to finding new leadership, OpenAI has updated its Preparedness Framework to reflect the competitive landscape of AI development. The company indicated it may “adjust” its safety requirements should a rival AI lab release a “high‑risk” model without similar protective measures, highlighting the tension between maintaining safety standards and staying competitive.
The recruitment comes amid rising public concern about the effects of generative AI chatbots on users’ mental health. Recent lawsuits have alleged that interactions with ChatGPT harmed individuals’ psychological well‑being. In one widely reported case, a man who later killed his mother and then himself had described delusional beliefs that were reinforced by responses he received from ChatGPT, according to court filings.
OpenAI’s search for a Head of Preparedness underscores the broader debate about how to develop, deploy, and regulate powerful AI systems responsibly. As the technology evolves, the company is signaling that it wants to be better prepared for the unexpected consequences that come with ever‑more capable AI.