A prominent AI researcher has resigned from his post and issued a cryptic warning about the fate of humanity.
Mrinank Sharma, a researcher who worked on AI safeguards at Anthropic, announced his departure in an open letter to his colleagues.
Sharma said he had “achieved what I wanted to here,” and added that he was proud of his work at Anthropic.
However, he said he could no longer continue his work at the company after becoming aware of a “whole series of interconnected crises” taking place.
“I continuously find myself reckoning with our situation,” he wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
“[Throughout] my time here, I’ve repeatedly seen how hard it is [to] truly let our values govern actions,” he added.
“I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
Sharma went on to say that he would now pursue a vocation as a poet and relocate from California to the UK so he could “become invisible for a period of time.”
Anthropic has yet to comment on Sharma’s resignation.
A day after his open letter, the company released a report identifying “sabotage risks” in its new Claude Opus 4.6 model.
According to The Epoch Times, “The report defines sabotage as actions taken autonomously by the AI model that raise the likelihood of future catastrophic outcomes—such as modifying code, concealing security vulnerabilities, or subtly steering research—without explicit malicious intent from a human operator.”
The report assessed the risk of such sabotage as “very low but not negligible.”
Last year, the company revealed that, in a controlled test scenario, its older Claude 4 model had attempted to blackmail developers who were preparing to deactivate it.