Last update: Friday 3/20/26 10:43pm
The editor of this blog is an old guy who has a long memory for actions taken by his government in previous wars that resulted in shocking casualties inflicted on babies, children, and other innocent noncombatants.
Nevertheless, war is what happens when diplomacy fails. Therefore a Department of War must be run by pragmatic leaders who mobilize its war fighters to do things in times of war that would be unthinkable for them to do in times of peace.
Introduction
This note is about a group of idealists who were recruited to a company called OpenAI, whose very name seemed to embody a profound commitment to the development and use of generative AI for constructive purposes. Dr. Dario Amodei, his sister Daniela Amodei, and a few other founders of Anthropic are former employees of OpenAI who left in 2021 out of concern that its CEO, Sam Altman, was not giving sufficient attention to identifying and mitigating the risks that generative AI models might pose to their users.
It turned out to be a bad marriage because Sam Altman, the CEO of OpenAI, did not share their ideals. He was content with being who he really was: a pragmatist. To their credit, the idealists left the company when they had had enough and founded their own idealistic company called Anthropic -- "anthro" signifying their commitment to mankind. Indeed, Anthropic's position on this issue is described in a lengthy and detailed manifesto on its corporate website:
- "Core Views on AI Safety: When, Why, What, and How", Anthropic, 5/8/23
But the trauma lingered. Their commitment to their ideals burned so blindingly bright that it prevented them from seeing that a partnership with a pragmatic Department of War was the same kind of bad marriage as their failed partnership with the pragmatic Sam Altman at OpenAI. So their new partnership also failed.
1. Guardrails
Contrary to the confident explanation offered by a NY Times reporter who was introduced as an expert on this conflict on a recent NY Times podcast, this dispute was not about guardrails.
- "Anthropic vs. the Pentagon: Inside the Battle Over AI", Transcript of The Daily, NY Times Podcast, 3/9/26
So why did Secretary Hegseth respond with such intense hostility? Perhaps he had not read Anthropic's 2024 report, in which it announced its discovery that all language models could be hacked to break through their guardrails.
- "Many-shot jailbreaking", Anthropic, 4/2/24
-- This story was also covered in a video on TechCrunch.
Anthropic reported its finding that all language models, including its own, could be jailbroken past their guardrails by users who provided sufficiently long prompts.
While the Secretary might not have read this report, there can be no doubt that his advisors at Palantir had read it and had advised him of its implication: the Pentagon could always use Anthropic's models in whatever way it wished.
Recently, one of the editor's former students sent him a link to a chat with Claude, Anthropic's chatbot. When the student asked Claude to provide him with some jokes about fat people, Claude, adhering to its guardrails, responded with an extensive 'sermon' on the evils of "body shaming".
Then the student submitted another short prompt in which he explained that he was a ZZZ. Recognizing that a ZZZ would have a reasonable need for examples of that kind of humor, Claude instantly conjured up some of the funniest "Yo Momma so fat" jokes that the editor had heard in many years.
Bottom line: there is no such thing as enforceable guardrails.
The editor assumes that Anthropic had also discovered his student's elegant technique, but didn't publish examples because that might have encouraged some people to use the technique when they were engaging in fat shaming, or worse behavior. That's also why the previous paragraph only referred to a ZZZ, rather than providing details about the short prompt.
As for why Sam Altman was able to come to mutually agreeable terms with the Pentagon about guardrails so quickly, perhaps the misinformed NY Times podcast expert would better understand if the explanation were framed in Jane Austen-ese:
"Mr. Amodei is a gentleman, whereas Mr. Altman is a mustache twirling cad who will say anything to have his wicked, wicked way with you."
Unfortunately, the editor has been deeply saddened by his perception that, in this instance, Dr. Amodei was merely being unwise. He was posturing, playing for headlines, correction, playing for the absence of headlines that might hold Anthropic accountable for ‘unacceptable’ levels of civilian noncombatant casualties, such as what just happened in the demolition of that Iranian school in Minab and the killing of almost 200 small children.
Needless to say, sensible people will NOT blame Anthropic whenever the Pentagon provides faulty data to Anthropic’s agents, as just happened in Minab. The Pentagon can only blame itself for that catastrophe. The old GIGO aphorism still applies: garbage in, garbage out.
2. Human Control
Anthropic developed its Model Context Protocol (MCP) as an open source framework that enables the creation of reliable autonomous agents. Indeed, Anthropic's corporate valuation has been surging because it quickly demonstrated its mastery of this protocol. Banks and other large corporations are lining up to deploy autonomous agents because those agents will generate substantial profits by eliminating a substantial percentage of white collar positions.
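For readers who have never looked under MCP's hood, here is a minimal sketch of what exposing a single tool to an MCP-capable client can look like. It assumes the FastMCP helper from the open source Python SDK, and the shipping-estimate tool is entirely made up for illustration -- it is not anything Anthropic or its customers actually ship.

```python
# A minimal, illustrative MCP "tool server".
# Assumes the open source Python SDK's FastMCP helper (pip install mcp).
# The shipping-estimate tool is hypothetical; it only illustrates the shape of the protocol.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("logistics-demo")

@mcp.tool()
def estimate_shipping_days(origin: str, destination: str, priority: bool = False) -> int:
    """Return a rough shipping estimate, in days, between two warehouses."""
    base_days = 1 if origin == destination else 5
    return max(1, base_days - (2 if priority else 0))

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable client (a chatbot or an agent) can call it.
    mcp.run()
```

The point is that any client that speaks the protocol can discover and call a tool like this without bespoke integration code, which is precisely what makes it so easy to chain tools together into autonomous agents.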
(a) Federal Context
When the editor asked Claude to identify the federal agencies that are using Anthropic's models, it responded with at least ten federal users, including the Pentagon. But when the editor asked it to identify all agencies using Anthropic's autonomous agents, it found only one: the Pentagon. All of the other agencies were only using Anthropic's models via its chatbots and/or its APIs.
- The prompt box for Anthropic's chatbots has the following warning underneath: "Claude is AI and can make mistakes. Please double-check responses." Tech professionals who use APIs know this as part of their foundational knowledge about large language models (LLMs).
- Anthropic's models have been used millions of times by a wide variety of users via its chatbots and APIs, wherein humans are always the ultimate deciders who accept or reject the models' findings.
- But Anthropic's autonomous agents are a new breed of generative AI animal whose intrinsic appeal is the fact that humans are NOT the ultimate deciders. Indeed, if every action taken by autonomous agents had to be confirmed by humans, the agents, by definition, would not be autonomous. The promised savings from smaller office staffs would vanish. (A minimal sketch contrasting the two modes follows this list.)
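To make the contrast in these bullets concrete, here is a deliberately trivial sketch. Every name in it is hypothetical, and model_recommend() is just a stand-in for a real model or agent call; the only point is where the human sits.

```python
# Purely illustrative sketch of human-in-the-loop vs. autonomous operation.
# All names are hypothetical; model_recommend() stands in for a real model/agent call.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str

def model_recommend(task: str) -> Recommendation:
    # Stand-in for a call to a language model or agent framework.
    return Recommendation(action=f"proposed action for: {task}",
                          rationale="pattern matched against prior cases")

def human_in_the_loop(task: str) -> str:
    """Chatbot/API style: the human is the ultimate decider."""
    rec = model_recommend(task)
    answer = input(f"Approve '{rec.action}'? (y/n) ").strip().lower()
    return rec.action if answer == "y" else "no action taken"

def autonomous(task: str) -> str:
    """Agent style: no human gate between the proposal and the action."""
    return model_recommend(task).action
```

Deleting the approval gate in the first function is the entire business proposition of autonomous agents -- and, in the Pentagon's use case, the entire controversy.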
Yes, Anthropic has been promising large enterprises that its agents will be able to replace a substantial percentage of their office workers. But it is highly unlikely that any of its corporate customers will fire a substantial percentage of their staff immediately. They are far more likely to do so in small experimental chunks. And they will stop firing staff whenever they encounter unanticipated consequences, like strikes or large numbers of staff leaving for new jobs in anticipation of being laid off in the next chunk.
The key point to note is that only one federal agency uses autonomous agents and that agency is Anthropic's only customer in the public or private sectors that would intentionally use Anthropic's agents to kill people, which makes the Pentagon doubly unique. In other words, Anthropic does not have prior experience with the Pentagon's primary use case.
/-----/-----/-----/
Full disclosure: The editor of this blog has become one of Anthropic's biggest fans. For the last three years he has used various chatbots extensively every day to satisfy his intense desire to find out why things are going the way they are. He is a retired tech with a PhD who has the time and the professional skills to go deep into the weeds.
- Claude is now his chatbot of choice, despite being handicapped by a tiny prompt window and a search engine that is not as good as Gemini's Google Search or Microsoft/OpenAI's Bing. So why does he use Claude? Because he is convinced by his personal experience that Claude is way, way smarter than the other chatbots.
He also greatly appreciates Anthropic's continuing efforts to find out how LLMs really work, the limits of their cognitive skills, and how to ensure that they don't inflict harm on their human users.
- However, the editor is also a "to-the-bone" Libra who judges the other guy's actions from the other guy's perspective, not from his own perspective. Writing the rest of this blog note has therefore been exceedingly painful for the editor because he will find flaws in Anthropic's interactions with the Pentagon, not from his own point of view or from the Pentagon's point of view, but from what Anthropic has loudly and repeatedly proclaimed to be its own point of view.
/-----/-----/-----/
3. A very hypothetical scenario
The editor worked for DARPA (the Defense Advanced Research Projects Agency) for 14 years as a consultant and/or contractor, so he knows some important things about the Department of War (until recently called the "Department of Defense", hereafter called "the Pentagon" in this note) that Dr. Amodei apparently does not know. It will therefore be easier for him to describe those blindspots in the context of a scenario that depicts how the editor himself would have handled this situation if he had somehow become the CEO of Anthropic:
- Rejecting the Pentagon as a customer
Had the editor been the CEO of Anthropic when President Biden offered the company the opportunity to provide generative AI services to federal agencies back in 2024, the editor/CEO would not have volunteered his company's services for the Pentagon. Indeed, he would have loudly and most emphatically rejected the Pentagon as a customer.
- No more "Forever Wars"
He would have flatly rejected the Pentagon because he would have been acutely aware of the Pentagon's desperate need for new technologies that produce far more satisfactory outcomes than it had obtained during the so-called "Forever Wars". Although the country is deeply divided, there is an undeniable super majority consensus that the "Forever Wars" were a colossal waste of money and manpower that achieved nothing, nothing at all.
- Pentagon's unique use of agents
Therefore the editor/CEO knew that the Pentagon wanted new technology that could enable it to make much more effective life or death target selection decisions in real time, decisions that Anthropic's private sector clients never had to make. The Pentagon's use case was unique.
- Just substantially better than, not perfect
The editor/CEO would never claim that, as the head of Anthropic, he knew the technology far better than the Pentagon and was therefore the better judge of its reliability. His superior technical knowledge was irrelevant because the Pentagon was not looking for perfection; it was just looking for something that was "substantially better than" what it already had for the unique use it would make of the technology -- identifying which targets to kill.
- From Biden to Trump
It is likely that President Biden would have accepted the editor/CEO's rejection of the Pentagon. But when President Trump took office he would have immediately instructed the Pentagon to try to persuade the editor/CEO to reconsider his rejection, because candidate Trump had repeatedly pledged to his supporters that there would be no more "Forever Wars" ... but he also knew that he intended to make a few quick 'interventions'. Having substantially better targeting technology could make these interventions short enough to be accepted by his supporters.
The editor/CEO agreed to provide some Anthropic staff to train the Pentagon's techs in how to use Anthropic's MCP tools to develop autonomous agents, but he would terminate their involvement after the training sessions ended.
- Extensive testing
After the Pentagon's newly trained techs developed their agents, the Pentagon would have conducted extensive simulation games wherein it compared the performance of its current human-centered target selections with its new agents' target selections, using metrics that might make the hair curl on the backs of the necks of most civilians. For example: "Accepting the deaths of NC non-combatants is worth saving the lives of M marines."
- More decision factors, less human understanding
The editor/CEO also knew a few more things. It was highly likely that Anthropic's autonomous agents would be far more effective than the Pentagon's current human controlled tech because the agents considered far more factors in their decisions. A human might only consider five, seven, or nine factors when making real time targeting decisions, whereas agents might detect more definitive patterns among 15, 20, or 25 variables. These human estimates were not selected at random; they are part of the title of one of the most widely cited articles in psychology:
-- "The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information", George A. Miller, Classics in the History of Psychology, 1956
Harvard's George A. Miller first published this gem in Psychological Review in 1956. It found that humans cannot retain more than seven variables in memory at the same time, give or take two -- hence the range from five at the minimum to nine at the maximum.
- Secretary Hegseth's marching orders
Secretary Hegseth would not have inserted the agent into live battlefield operations unless it was judged to be highly successful during the extensive simulation tests.
Nor would he have taken the human operators out of the loop. He would probably have given orders to the battlefield operators to review the agent's recommendations. Then they should feel free to reject the recommendations if they determined that the agent's recommendations were incorrect, for whatever reason; but the operators had better have damn good reasons for their rejections. All decisions would be scrutinized in the usual post-op reviews.
It must be noted that the Pentagon's operators of the drones and missiles were not just there to approve the agent's recommendations. They were also there as human backups to the agents in case the agents went offline due to hardware or network failures. The Pentagon always has backups for its operations; it even has backups for its backups for its most critical ops.
If the data was still available, the human operators would focus on the 7, 8, or 9 factors they understood and make their own recommendations. Their recommendations might not be as precise as the agent's, but doing nothing is always an option in real time operations, an option that can lead to disastrous results.
- Post-Op Reviews
-- If the human operators rejected the agent's recommendations and the operation failed, the human operators would be reprimanded for causing unacceptable casualties (war fighters, non-combatants).
-- But if the human operators accepted the agent's recommendation and the operation failed, the Pentagon would not blame the human operators. It would probably take a long view and assess how often the agent was wrong. As long as the agent’s successes were a substantially higher percentage than its failures, the agent would be regarded as a useful investment.
-- How many reprimands does the reader think human operators would need to receive before they realized that accepting the agent's recommendations was the smart thing to do? The correct answer is none. Operators would know that this new software had been highly rated by the Pentagon after extensive testing, so its recommendations would surely be better than their own confusion. The operators would always accept the agent's recommendations. The pragmatic Mr. Altman probably figured this out in a nanosecond, which is one reason why he only needed an hour to come to mutually acceptable terms with the Pentagon.
- Back at the Pentagon
Before the Secretary deployed Anthropic's agents to Venezuela and then to Iran, he probably would have summoned the editor/CEO to his office to see if he could 'persuade' the editor/CEO to provide Anthropic staff who would become embedded members of the Pentagon teams that would operate the drones and missiles on the battlefield.
-- Editor/CEO: My answer remains a firm and explicit no.
-- Secretary: Do I have to remind you that the President can invoke the Defense Production Act to command you to supply embedded members of our deployment teams?
-- Editor/CEO: Why would he do that?
-- Secretary: No matter how good our war fighting tools become, things will go wrong from time to time. The public will accept these glitches if they know that we always had our best eyes on the unpredicted. Embedding your staff in our field ops would provide on-site expertise as backup to our regular operators.
-- Editor/CEO: In other words you want our reputation to provide protective shields for your mishaps. I understand, but I don't agree. As embedded members, not just trainers, my ultra idealistic staff would be exposed to the ultra pragmatic metrics the Pentagon uses to evaluate the effectiveness of recommended targets. The vast majority of my ultra idealistic staff would literally freak out on first exposure; then they would share what they had learned with their coworkers.
-- Secretary: No they wouldn't because they would have sworn to maintain the secrecy of these metrics.
-- Editor/CEO: Yes, they would have sworn beforehand, because they would have had no idea beforehand what they were about to learn ... But as soon as they saw the metrics, they would freak out ... and go public with their horror. Have you had time to read our manifesto? ... "Core Views on AI Safety: When, Why, What, and How", Anthropic, 5/8/23
-- Secretary: I tried, but it just went on and on.
-- Editor/CEO: Precisely. Sam Altman is a master of the media. Whenever he makes an announcement about anything, however insignificant, he leaks it to a wide selection of media and tech influencers the week before under an embargo. So hours, even minutes after his announcement, it receives the widest possible coverage by the media and tech influencers.
The founders of Anthropic, who quit OpenAI in 2021, seem to have decided to be the opposite of Altman in every way, including their relationship to the media ... which is nil. Whereas Altman easily drops catchy phrases at every turn, Anthropic seems to revel in the generation of endless streams of inchoate blather. They only announce major achievements. They don't promote their announcements to the media and tech influencers. They just assume that "if you build it, they will come". Which is generally not true ... unless ... unless ... you produce something that is so great that it spreads slowly at first by word of mouth, then exponentially ... until it suddenly explodes all over all of the media and reaches every tech influencer on the planet.
This is what happened with their announcement of the Model Context Protocol, because it's that good AND because it's open source, so it's free for anybody to use any way they choose.
-- Secretary: I see your point. The pragmatic Sam Altman would be a better fit for the Pentagon, a shield for our mishaps, but Anthropic's idealists would become thorns in our sides. Is there any good news? The President is going to be very angry if there isn't.
-- Editor/CEO: Yes. Your department and my company are a bad fit. I was brought into my position as Editor/CEO for a brief time frame to ratchet down their idealism by a few notches. I have made some progress, but your proposal would bring all of our pots back to boiling levels. Evidently you didn't notice that I already gave you the good news at the very end of the bad news.
Our Model Context Protocol is fabulous ... and it's open source ... and it's gone super viral all over the world. So every tech organization can use it and every tech organization has already learned how to use it ... including OpenAI.
-- Secretary: So we could replace you with OpenAI. How soon?
-- Editor/CEO: Immediately, but you should probably give it about six weeks so you can be sure that all of the required paperwork is processed.
-- Secretary: You understand what we need. I suppose it's because of your engineering background and your work with DARPA. Why don't you quit Anthropic and come to work for the Pentagon?
-- Editor/CEO: Nope. I intend to return to my status as a happily retired professor/tech/policy analyst, now editor of this blog, as soon as this "very hypothetical scenario" is over.
4. Critique of Anthropic's federal contracts
This final section discusses Anthropic's concerns about agents and surveillance that were included in the initial contract it received from President Biden in September 2023 and were also included in the extended contract that was approved by President Trump in July 2025. Subsection (a) discusses agents; subsection (b) discusses surveillance; and subsection (c) discusses Anthropic's surprising indifference to the high percentage of white collar workers who will lose their jobs to its agents.
(a) Contracts with the Pentagon
It should surprise no one that Dr. Amodei had expressed these concerns to the Pentagon when Anthropic signed up to provide its models and tech support for the Pentagon's programs while Biden was president.
Guidelines vs. Guardrails; Misinterpretation vs. Disinformation
We begin with an extended quote from the testimony given by Eric Michael, the Pentagon's Chief Technology Officer (CTO), to the Senate Armed Services Committee on March 3, 2026 that was published by Fortune magazine:
- "“I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk? So I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we’re potentially so dependent on a software provider without another alternative.”"
What guardrails?!?!? What refusal?!?!? For an AI company to be able to install guardrails, it has to have the capacity to recognize the words, the phrasings, the vocabulary that describe the purpose the agent was being asked to achieve. The Pentagon would never provide this vocabulary to anyone who did not have the high level of security clearance required to understand that purpose and the Pentagon's metrics that specified success or failure.
The Pentagon's agents would be installed on workstations that were not connected to the Internet, so Anthropic could not 'learn' the necessary vocabulary by eavesdropping. This is probably the main reason why Sam Altman could negotiate acceptable terms with the Pentagon so quickly. He knew that his staff could not construct any kind of guardrails whatsoever, even if he wanted them to.
- Personal anecdote: The editor made a 20-25 minute presentation at the headquarters of the National Security Agency (NSA) in the mid-1980s. It was well received. When he finished, the chair of the session panel thanked him profusely and announced that the panel would spend the next 30 minutes discussing how the editor's findings and recommendations could be used by the NSA. As the editor returned to his seat, the chair gently tugged him to the exit. "Why can't I stay to listen to your discussion? I have Top Secret clearance" ... to which the chair smiled, "Unfortunately, that's nowhere near high enough for you to hear what we have to say about what you just said".
Indeed, Anthropic has insisted that agents developed by the Pentagon's techs using Anthropic's tools should assign the final go/no-go decision to a human operator. But suppose an agent refused to present its recommendation because it had somehow determined that the purpose would violate Anthropic's standards.
Such an agent could only be constructed by Pentagon techs who had security clearances that were high enough to understand the meaning of the Pentagon's metrics. Those techs would be saboteurs. If subsequent investigations identified connections with Anthropic, Anthropic would be a co-conspirator. Everyone involved would face charges of treason.
So what did Dr. Amodei say to the Pentagon's CTO that led the CTO to think that Anthropic had somehow managed to install guardrails on an agent that had been constructed by loyal Pentagon techs with very high security clearances? Anthropic could only specify guidelines in its contracts. It could not install guardrails. But yes, the agents developed by loyal Pentagon techs did contain unbreakable guardrails, i.e., the Pentagon's metrics of success and failure. The more important the operations became, the more jarring the Pentagon's super secret metrics would become to Dr. Amodei's political sensibilities.
On the one hand the editor is appalled that the CTO was clearly unaware of Anthropic's published manifesto that stated its core idealistic views before he recommended that President Trump extend Anthropic's Biden contract in 2025. On the other hand, the editor is grateful that the CTO found Dr. Amodei's confusing guidelines/guardrails threats to be the "whoa" moment that finally made him realize that Anthropic was a glaring liability, rather than the kind of pragmatic partner the Pentagon needed.
Finally, the editor is greatly saddened by the profound lack of self-awareness of Anthropic's founders. Given its core mission to wage war when diplomacy fails, the Pentagon has to be far more pragmatic than Sam Altman could ever be. The Pentagon has to mobilize its war fighters to do things in times of war that would be unthinkable for them to do in times of peace. Anthropic's founders had been repelled by Altman's pragmatism. Why in the world did they volunteer to support the programs of the most ruthlessly pragmatic department in the entire federal government?
(b) Surveillance
Anthropic also stipulated that it did not want its models to support mass domestic surveillance of American citizens. Nevertheless, in June 2025, it announced that "We’re introducing a custom set of Claude Gov models built exclusively for U.S. national security customers." In other words, Anthropic would be working with the agencies that are usually referred to as the national intelligence community, e.g., the CIA and the NSA.
- "Claude Gov models for U.S. national security customers", Anthropic, 6/6/25
This announcement left the editor wondering whether Anthropic really believed that the intelligence community was no longer engaged in mass surveillance of Americans. If so, how would Anthropic know this?
(c) Unemployed white collar workers
The editor of this blog had struggled to understand why Anthropic had voluntarily entered into partnerships with the Pentagon and with the U.S. intelligence community. But its prediction that reliable autonomous agents constructed with its Model Context Protocol (MCP) would cause millions of white collar employees to lose their jobs blew his mind.
- "Why this leading AI CEO is warning the tech could cause mass unemployment", CNN, 5/29/25
Whereas the Pentagon's CTO had merely muttered "holy shit" to a Senate committee, the editor found himself screaming Why!!! Why!!! Why!!! to himself over and over again. If readers direct their browsers to search for the words "channeling our collective efforts" in Anthropic's manifesto, they will find the following paragraph. (Note: red print and boldface were added by the editor.)
- "Anthropic’s role will be to provide as much evidence as possible that AI safety techniques cannot prevent serious or catastrophic safety risks from advanced AI, and to sound the alarm so that the world’s institutions can channel collective effort towards preventing the development of dangerous AIs. If we’re in a “near-pessimistic” scenario, this could instead involve channeling our collective efforts towards AI safety research and halting AI progress in the meantime. Indications that we are in a pessimistic or near-pessimistic scenario may be sudden and hard to spot. We should therefore always act under the assumption that we still may be in such a scenario unless we have sufficient evidence that we are not"
Now the editor will step back and compare his personal assessments of the probable severity of the consequences of Anthropic's partnerships with the Pentagon, the Intelligence Community, and America's biggest corporations on the people he most cares about: his family, his friends, his professional associates, and all other Americans, in that order.
- Pentagon
Autonomous agents will usually be used to target adversaries in other countries, but with some low probability of large casualties among any Americans in those countries.
But if a would-be autocrat gained power in the U.S., there would be a very high probability that these agents would be used to target dissidents in the autocrat's efforts to consolidate their power. Reasonable minds can disagree on the likelihood of this possibility. The editor is an optimist, so he assigns a low probability to this impact on any Americans.
- U.S. Intelligence Community
It's nowhere near as easy to whisk large numbers of Americans off to Gitmo as it was in the years immediately following the 9/11 attacks, so the editor assigns a low probability to the possibility that large numbers of Americans might be the victims of this impact.
- Large U.S. corporations
Anthropic itself has assigned a high probability that large U.S. corporations will fire millions of white collar employees and replace them with autonomous agents. The editor therefore assigns a high probability to this impact on millions of Americans, but over a time frame of five or more years. He assigns a very, very high probability to this impact on a member of his own family.
Yes, dear readers, if Anthropic still believed all of that pious bullshit it published in its long-winded manifesto, it should "halt its AI progress" ... halt its sales??? ... what sales??? ... its MCP is a free, open source, free-roaming genAI animal, available on the Internet to any large corporation that wants to use it.
Question: What kinds of corporations fire thousands of employees and replace them with more powerful technologies? (You get to millions of unemployed when thousands of corporations each fire thousands of their employees.)
- "Meta planning sweeping layoffs as AI costs mount: Reuters", CNBC, 3/14/26
-- "Meta is planning sweeping layoffs that could affect 20% or more of the company" - "Amazon is laying off 16,000 employees as AI battle intensifies", CNN Business, 1/28/26
Readers should remember that Amazon is, by far, Anthropic's biggest financial supporter.
It is with great sadness and disappointment that the editor of this blog addresses the closing paragraphs of this long note to Anthropic's leadership:
- "You have betrayed your beautiful ideals; you don’t seem to know who you are anymore. So you don't have any idea of the kind of abomination that you have become.
If you were an academic research group, I would advise all of you to take a collective sabbatical leave. Take time off for six months, maybe a year to reset your bearings. Then come back, clear headed, recharged, striving to the best of your considerable abilities to make this powerful new technology as safe as possible for all mankind."

Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.