A top artificial intelligence safety researcher is warning that the rapid advancement of AI technology could place humanity on a dangerous path if it is not properly understood and controlled.
Mrinank Sharma, the former lead AI safety researcher at Anthropic, resigned this week and wrote in a post on X that “the world is in peril.” He cautioned that the threat is “not just from AI or bioweapons, but from a whole series of interconnected crises” unfolding at the same time.
“Today is my last day at Anthropic. I resigned. Here is the letter I shared with my colleagues, explaining my decision.” — mrinank (@MrinankSharma) February 9, 2026
Sharma said he had “achieved what I wanted to here at Anthropic,” the company behind the Claude chatbot, and added that he felt “fortunate to contribute to early AI safety efforts at the company.”
His remarks have intensified concerns inside the tech industry about the pace and direction of artificial intelligence development.
Appearing on ABC News, Malo Bourgon, chief executive officer of the Machine Intelligence Research Institute, said many people have not been exposed to “how quickly and how better and better AI systems are getting.” He said the concern extends far beyond job displacement.
“We need to take a step back and think about the goal of these companies,” Bourgon said, noting that many leading firms aim to build superintelligence, defined as AI systems smarter than all of humanity combined.
The core problem, he said, is that researchers do not fully understand how modern AI systems function.
“It’s more like growing AI systems than it is kind of like actually building them in a way that we understand,” Bourgon said.
That lack of understanding raises a troubling question: “How do we control something that is much smarter than us that we don’t understand that might not value and care about the same things that we care about?”
Bourgon warned that if control over such systems were lost, it “could literally result in human extinction.” His organization has spent decades studying the long-term risks of artificial superintelligence.
Still, he acknowledged the promise AI holds. Intelligence, he said, is responsible for both the positive and negative achievements of humanity. If developed safely, it could usher in “an era of untold prosperity” and help solve many of the world’s most pressing problems.
The challenge lies in building systems that are reliable and safe while resisting intense competitive pressures. Companies and nations are racing to dominate the field, creating incentives to move faster rather than slow down.
“There’s a sense in which there’s a lot of benefits to chase. Everyone’s kind of racing, so the incentives are difficult,” Bourgon said.
He pointed to comments from AI leaders who have expressed a desire to proceed more cautiously but feel constrained by global competition.
Ultimately, Bourgon suggested that governments and the international community may need to coordinate efforts to slow the pace of development until researchers better understand what they are creating.
As AI systems grow more powerful, the warnings from within the industry itself are becoming harder to ignore. The question now is whether policymakers and tech leaders will act before the technology outpaces humanity’s ability to control it.
James Lasher, a seasoned writer and editor at Charisma Media, combines faith and storytelling with a background in journalism from Otterbein University and ministry experience in Guatemala and the LA Dream Center. A Marine Corps and Air Force veteran, he is the author of The Revelation of Jesus: A Common Man’s Commentary and a contributor to Charisma magazine. For interviews and media inquiries, please contact [email protected].