The most important cause worth fighting for, as artificial intelligence advances, is human intelligence.
In a hundred years, it will not matter which AI model was used in war, or which one was safe and which was not. What will matter is whether, as AI showed exceptional capabilities, any company used the opportunity to pursue ways to make human intelligence better, so that important problem-solving improves, on average, everywhere, independent of AI, for humanity.
While several AI companies, including Anthropic, are competitive, they are not indispensable. None has produced solutions or monopoly products that address national security needs, defense superiority, or even peacekeeping in restive zones.
Anthropic
Anthropic has no solution to food security and agriculture in countries where the U.S. has interests, for example, Venezuela. Anthropic has no energy solutions, for equity, for the people, in a place like Iran. Anthropic has no solution to mental health in the United States, the drug addiction problem, border safety, national debt reduction, and much more.
Anthropic has a powerful AI, but Anthropic too has been unable to use enough human intelligence to push AI to a general utility that would make the Pentagon at least deferential. There are other industry-wide problems as well, like deepfakes, AI use for cheating, AI psychosis, and more, for which Anthropic has no offer or solution.
Whatever Anthropic's goals are, say super AI safety, they hold little promise for humanity if human intelligence stays stagnant. AI is powerful, yes. AI risks are countless, yes, but humanity's ability to survive is better if human intelligence is aided, not abandoned. There are predictions that jobs will be wiped out, entry-level work will disappear, wages and benefits will be cut, and much else.
There are no major economic models yet to prepare humanity, other than the vague idea of universal basic income. So, even with all the posturing of being on the side of AI safety and humanity, Anthropic is not that strong yet, or generally useful, and is not doing as much for humanity through human intelligence.
While Claude Code is excellent, the United States military will do what it has to do, especially with AI, a technology with state-of-the-art open-source alternatives and competitors willing to replace it.
Anthropic, by its own terms, should never have allowed the military to [officially] use Claude. It is either allowed or not. There should be no expectation that restrictions would hold, especially as the rules of engagement in combat may change. Maybe the whole drama is Anthropic trying to do damage control after news leaked that Claude AI was used in the Venezuelan operation to depose Maduro. Maybe.
Anthropic’s AI is already used in the Iranian war. So, for all the guardrails and pledges, it is being used in deviation from the original mission. Seeing this, what is Anthropic’s contribution to peace in the Middle East?
Is it not possible to use Claude to write thousands of peace-inclined messages, as cognitive restructuring against hardline and extreme ideologies, and then have organizations broadcast them digitally, to at least have some effect? Clearly, OpenAI and other AI companies are not doing this, but Anthropic wanted to be different and should have been.
Human Intelligence
No AI company on earth, including Anthropic, has a human intelligence research lab. No university has one either, devoted directly to human intelligence. Can Anthropic not dedicate some resources to the question: seeking the components of human intelligence in the brain, their relays, mechanisms, and everything else, and coming up with a model that can also shape nosology and prepare humanity for the future?
Anthropic is playing small ball with the Claude constitution, AI welfare, and several other errant solutions, without doing what matters.
Maybe they mean well. But that does not imply that they are actually different, considerate to humanity, or truly unique.
As some people take sides, human intelligence remains in drought, starving away, collapsing, with no definition, no lab, no research, nothing. All we have is noise over Anthropic’s rectitude.
There is a recent [February 28, 2026] story on CBS, Anthropic CEO says he’s sticking to AI “red lines” despite clash with Pentagon, stating that: “Hours after a bitter feud between the Pentagon and Anthropic ended with the Trump administration cutting off the artificial intelligence startup, Anthropic CEO Dario Amodei told CBS News in an exclusive interview Friday night he wants to work with the military — but only if it addresses the firm’s concerns.”
“The conflict centers on Anthropic’s push for guardrails that explicitly prevent the military from using its powerful Claude AI model to conduct mass surveillance on Americans or to power autonomous weapons. The Pentagon wants the ability to use Claude for “all lawful purposes,” and says it isn’t interested in either of the uses that Anthropic was concerned about.”
There is a new [March 1, 2026] report on WSJ, U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban, stating that:
“Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran.”
This article was written for WHN by David Stephen, who currently does research in conceptual brain science, with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.
Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration.