Last update: Monday 4/8/26
This final version of an evolving report explains how Anthropic induced the Pentagon to cancel a partnership that never should have occurred. It retains much shorter versions of previous sections of the report that discredited plausible alternative explanations of the breakup. The revised report does not include a final section yet in order to give the editor more time to consider the disturbing implications of a possibility that he has been rejecting for the last five weeks.
Introduction
This final revision also reflects the editor's perception that Dr. Dario Amodei, Anthropic's CEO, was such an intense idealist that he could not bluff about anything related to the realization of his ideals.
Dr. Amodei would need to conceive a threat that might actually work. His counterparts at the Pentagon would also know that he was incapable of bluffing, so they would only need to believe that his threat was possible, even if they were not convinced that he would actually carry out the threat. The mere possibility was a chance they could not take. So they cried "Whoa ... Holy Shit"; then they recommended that President Trump cancel Anthropic's contract immediately, which is exactly what Dr. Amodei wanted.
Dr. Amodei would need to conceive a threat that might actually work. His counterparts at the Pentagon would also know that he was incapable of bluffing, so they would only need to believe that his threat was possible, even if they were not convinced that he would actually carry out the threat. The mere possibility was a chance they could not take. So they cried "Whoa ... Holy Shit"; then they recommended that President Trump cancel Anthropic's contract immediately, which is exactly what Dr. Amodei wanted.
More generally, this note is about a group of idealists who were recruited to a company called OpenAI, a company whose very name seemed to embody a deep commitment to the development and use of generative AI for constructive purposes. Dr. Dario Amodei, his sister Daniela Amodei, and a few other founders of Anthropic left OpenAI in 2021 because of their concern that its CEO, Sam Altman, was not giving sufficient attention to identifying and mitigating the risks that generative AI models might pose on its users.
It turned out to be a bad marriage because Altman did not share their ideals. He was content with being who he really was, a pragmatist. To their credit, the idealists left the OpenAI when they had enough and founded their own idealistic company called Anthropic -- "anthro" signifying their commitment to mankind. Anthropic's position on this issue is described in a lengthy and detailed manifesto on its corporate website:
- "Core Views on AI Safety: When, Why, What, and How", Anthropic, 5/8/23
But the trauma lingered. Their commitment to their ideals was so blindingly bright that it prevented them from seeing that a partnership with a pragmatic Department of War was the same kind of bad marriage as their failed partnership with the pragmatic Sam Altman at OpenAI.
Wars begin when diplomacy fails. Therefore a Department of War must be run by pragmatic leaders who mobilize its war fighters to do things in times of war that would be unthinkable for them to do in times of peace. Indeed, the ledership of a Department of War had to be far more pragmatic than Sam Altman ever needed to be.
1. Guardrails
Shortly after President Trump directed the Pentagon to cancel Anthropic's contract, Dr. Amodei stumbled through a televised CNBC interview that left most viewers with the impression that he was making unreasonable demands about guardrails.
Shortly after President Trump directed the Pentagon to cancel Anthropic's contract, Dr. Amodei stumbled through a televised CNBC interview that left most viewers with the impression that he was making unreasonable demands about guardrails.
- "Anthropic CEO responds to Trump order, Pentagon clash", CNBC (YouTube video), 2/28/26
His responses made it seem like Anthropic failed to install reasonable guardrails, whereas Sam Altman only needed an hour to come to a mutually acceptable agreement about guardrails with the Pentagon. Nevertheless, this dispute was not about guardrails. The Pentagon's CTO and his partnerss at Palantir were surely aware of Anthropic’s 2025 report in which it announced its discovery that all language models could be hacked to break through their guardrails.
- "Many-shot jailbreaking", Anthropic, 4/2/24
Anthropic had ound that all language models, including its own, could be hacked by its users to jailbreak its guardrails if its users provided sufficiently long prompts. Therefore Dr. Amodei knew that guardrails would not prevent the Pentagon from using Anthropic's agents autonomously.
- Recently, one of the editor's former students sent him a link to a chat with Claude, Anthropic’s chatbot. When the student asked Claude to provide him with some jokes about fat people, Claude, adhering to its guardrails, responded with an extensive 'sermon' on the evils of "body shaming".
Then his student submitted another short prompt in which he explained that he was a ZZZ. In other words, he changed the context of his request. Recognizing that a ZZZ would have reasonable need for examples of that kind of humor, Claude instantly conjured up some of the funniest "Yo Momma so fat" jokes that the editor had heard in many years.
The editor assumes that Anthropic had also discovered his student's elegant technique, but didn't publish examples because that might have encouraged some people to use the technique when they were engaging in fat shaming, or worse behavior. That's also why the previous paragraph only referred to a ZZZ, rather than provide details about the short prompt.
The editor now believes that Dr. Amodei was posturing during his CNBC interview, playing for time, praying for the absence of headlines that might hold Anthropic accountable for what he regarded as unacceptable levels of deaths and non fatal casualties, such as what had just happened in the demolition of an Iranian school in Minab and the killing of almost 200 small children. Perhaps he had not yet figured out how to offer credible resistance to the Pentagon's use of Anthropic's agents autonomously.
Needless to say, sensible people would NOT blame Anthropic whenever the Pentagon provided faulty data to Anthropic’s models, as happened in Minab, as reported in Time, 3/11/25 .The Pentagon could only blame itself for that catastrophe. The old GIGO aphorism still applies: garbage in, garbage out.
Bottom line: there is no such
thing as enforceable guardrails.
So the question becomes, why did Dr. Amodei want to have his contract with the Pentagon canceled now? In the summer of 2025, the Trump administration had signed an extension of the contract Anthropic had received from the Biden administration. Both contracts stipulated that Anthropic’s agents should NOT function autonomously, that humans should always make the final decisions as to whether an agent’s recommendations should be accepted or rejected. So what had changed?
2. Human Control
Anthropic developed its Model Context Protocol (MCP) as an open source framework that enables the creation of reliable autonomous agents. Indeed, Anthropic‘s corporate value has been surging because it quickly demonstrated its mastery of this protocol. Banks and other large corporations are lining up to deploy autonomous agents via the MCP because autonomous agents will generate substantial profits by eliminating a substantial percentage of white collar positions.
(a) Federal Context
When the editor asked Claude, Anthropic’s chatbot, to identify the federal agencies that were using Anthropic's models, it responded with at least ten federal users including the Pentagon. But when the editor asked it to identify all agencies using Anthropic's autonomous agents, it only found one: the Pentagon. All of the other agencies were only using Anthropic's models via their chatbots and/their APIs.
- The prompt box for Anthropic's chatbots have the following warning underneath: "Claude is AI and can make mistakes. Please double-check responses." Tech professionals who use API's already know this as part of their foundational knowledge about large language models (LLMs).
- Anthropic's models have been used millions of times by a wide variety of users via their chatbots and API's wherein Humans are always the ultimate deciders who accept or reject the models' findings.
- But Anthropic's autonomous agents are a new breed of generative AI animal whose intrinsic appeal is the fact that human's are NOT the ultimate deciders. Indeed, if every action taken by autonomous agents had to be confirmed by humans, the agents, by definition, would not be autonomous. The promised savings in smaller office staff would vanish.
Yes, Anthropic has been promising large enterprises that its agents will be able to replace a substantial percentage of their office workers. But it is highly unlikely that any of its corporate customers will fire a substantial percentage of their staff immediately. They are far more likely to do so in small experimental chunks. And they will stop firing staff whenever they encounter unanticipated consequences, like strikes or large numbers of staff leaving for new jobs in anticipation of being laid off in the next chunk.
The key point to note is that only one federal agency uses autonomous agents and that agency is Anthropic's only customer in the public or private sectors that would intentionally use Anthropic's agents to kill people, which makes the Pentagon doubly unique. In other words, Anthropic did not have prior experience with the Pentagon's primary use case.
Full disclosure: Claude is now the editor's chatbot of choice, despite being handicapped by a tiny prompt window and a search engine that is not as good as Gemini's Google Search or Microsoft/OpenAI's Bing. So why does he use Claude? Because he is convinced by his personal experience that Claude is way, way smarter than the other chatbots.
He also greatly appreciates Anthropic's continuing efforts to figure out how LLMs really work, the limits of their cognitive skills, and how to ensure that they don't inflict harm on their human users.
He also greatly appreciates Anthropic's continuing efforts to figure out how LLMs really work, the limits of their cognitive skills, and how to ensure that they don't inflict harm on their human users.
3) A hypothetical scenario
The editor worked for DARPA (the Defense Advanced Research Agency) for 14 years as a consultant and/or contractor, so he knows some important things about the Department of War (until recently called the "Department of Defense", hereafter called "The Pentagon" in this note) that Dr. Amodei apparently did not know. It will therefore be easier for him to describe those blindspots in the context of a hypothetical scenario that depicts how the editor himself would have handled this situation if he had somehow become the CEO of Anthropic.
The editor worked for DARPA (the Defense Advanced Research Agency) for 14 years as a consultant and/or contractor, so he knows some important things about the Department of War (until recently called the "Department of Defense", hereafter called "The Pentagon" in this note) that Dr. Amodei apparently did not know. It will therefore be easier for him to describe those blindspots in the context of a hypothetical scenario that depicts how the editor himself would have handled this situation if he had somehow become the CEO of Anthropic.
- Rejecting the Pentagon as a customer
Had the editor been the CEO of Anthropic when president Biden offered the company the opportunity to provide generative AI services to federal agencies back in 2024, the editor/CEO would not have volunteered his company's services for the Pentagon.
- No more "Forever Wars"
He would have flatly rejected the Pentagon because he would have been acutely aware of the Pentagon's desperate need for new technologies that produce far more satisfactory outcomes than it had obtained during the so-called "Forever Wars". Although the country is deeply divided, there is an undeniable super majority consensus that the "Forever Wars" were a colossal waste of money and manpower that achieved nothing, nothing at all.
- Pentagon's unique use of agents
Therefore the editor/CEO knew that he Pentagon wanted new technology that could enable it to make much more effective life or death target selection decisions in real time, decisions that Anthropic's private sector clients never had to make. The Pentagon's use case was unique. - Just substantially better than, not perfect
The editor/CEO would never claim that, as the head of Anthropic, he knew the technology far better than the Pentagon, so he was the better judge of its reliability. His superior technical knowledge was irrelevant because the Pentagon was not looking for perfection; it was just looking for something that was "substantially better than" what they already had for the unique use it would make -- identifying which targets to kill and how best to kill them. - From Biden to Trump
It is likely that President Biden would have accepted the editor/CEO's rejection of the Pentagon. But when President Trump took office he would have immediately instructed the Pentagon try to persuade the editor/CEO to reconsider his rejection, because candidate Trump had repeatedly pledged to his supporters that there would be no more "forever Wars" ... but he also knew that he intended to make a few quick 'interventions'. Having substantially better targeting technology could make these interventions short enough to be accepted by his supporters.
The editor/CEO agreed to provide some Anthropic staff to train the Pentagon's techs how to use Anthropic's MCP tools to develop autonomous agents, but he would terminate their involvement after the training sessions ended. - Extensive testing
After the Pentagon's newly trained techs developed their agents, the Pentagon would have conducted extensive simulations wherein it compared the performance of its current human-centered target selections with their new agents' target selections using metrics that might make the hairs curl on the back of the necks of most civilians. For example, "Accepting the deaths of N non-combatants is worth saving the lives of M marines."
If Anthropic distributed updates to its models or procedures for constructing agents, the Pentagon would have run what is now called regression tests to ensure that the updates performed as well as before, i.e., that no new problems had been created and/or that that no guardrails or other limitations had been imposed by the updates. - More decision factors, less human understanding
It was highly likely that Anthropic's autonomous agents would be far more effective than the Pentagon's current human controlled tech because the agents considered far more factors in its decisions.
But the maximum number of factors that humans could consider at the same time was estimated to be no more than 9, as was reported "The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information", George A. Miller, Classics in the History of Psychology, 1956.
More recently, that rough estimated maximum was reduced from 9 to 4 factors as reported in "The magical number 4 in short-term memory: a reconsideration of mental storage capacity", N. Cowan, Pub Med/NIH, February, 2001
This point is crucial. If Dr. Amodei's expectation required humans to process more than 4 factors at a time in real time, then he was requiring human handlers to exceed human capabilities. This was an impossible expectation, so his so-called contract specification was a fraud. The contract was invalid. Had the Pentagon known that this was his expectation, the Pentagon would not have signed the contract.
Dr. Amodei seems to think that this situation is akin to a worker who has a colleague who is a much better writer. The colleague can write a page of the clearest prose in less than an hour, whereas the worker might require a day or two to produce something that isn't as well written. But the worker is be able to read and understand the colleague's prose in no more than a few minutes.
So too, an agent can identify the many factors that best identify enemy targets plus non-combatants in an area and how best to respond to the enemy's threats. Identifying the factors is the hard part. Surely the human handler will be able to understand the agent's recommended best response within a few moments, right? Absolutely wrong if the human handler must understand a recommendation in real time that involves more than 4 factors. Dr. Amodei is either ignorant of Miller's historic finding and Cowan's lower, more accurate assessment .. or he has no respect for psychological research.
Secretary Hegseth's marching orders
Secretary Hegseth would not have inserted the agents into live battlefield operations unless it was judged to be highly successful during the extensive simulation tests.
Nor would he have taken the human operators out of the loop.
He would probably have given orders to the battlefield handlers to review the agent's recommendations. Then they should feel free to reject the recommendations if they determined that the agent's recommendations were incorrect.
-- For example, human handlers would check the viability the agent's recommendations. Handlers might notice that the agents recommended using more drones than the handlers' units currently possessed or the agent did not include the handlers' recent estimates of the number of noncombatants within firing range. Such mishaps might occur because the handlers had not updated these parameters in the data fed to the agents. The handlers would resubmit the corrected parameters so the agents could revise their recommendations. - Murphy’s Law
The Pentagon's operators of the drones and missiles were not just there to approve the agent's recommendations. They were also there as human backups in case the agents went offline due to hardware, network failures or whatever mishap. The Pentagon always has backups for its operations; it even has backups for its backups for its most critical ops.
If the data was still available, the handlers would focus on the four factors they understood and make their own recommendations. Their recommendations would not be as effective as the agents, but doing nothing is always an option in real time operations, an option that can lead to disastrous results. - Post-Op Reviews
-- If handlers rejected the agent's recommendations and the operation failed, the handlers would be reprimanded for causing unacceptable casualties (war fighters, non-combatants)
-- But if the handlers accepted the agent's recommendation and the operation failed, the Pentagon would not blame the handlers. It would probably take a long view and assess how often the agents were wrong. As long as the agents’ successes were a substantially higher percentage than their failures, the agents would be regarded as a useful investment.
-- Handlers would know that this new software had been highly rated by the Pentagon after extensive testing, so its recommendations would surely be better than their confusion by the agents invoking so many factors. The handlers would always accept the agent’s recommendations after making the viability checks noted above.
4. Kill Switch
We begin with a quote from the testimony given by Eric Michael, the Pentagon's Chief Technology Officer (CTO), to the Senate Armed Services Committee on March 3, 2026 as reported by Fortune magazine, 3/7/26
"“I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk? So I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we’re potentially so dependent on a software provider without another alternative.”"
What guardrails?!?!?, What refusal?!?!? Yes, Anthropic's staff had been given a high level of security as per this announcement by Palantir:
- "Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations", Palantir, 11/7/24
But whatever security level that Anthropic's staff were assigned when they helped the Pentagon develop agents, their levels would not be high enough for them to anticipate how the agents or any other software would actually be used in battlefield situations.
- For example, Anthropic's staff would not be given the specific levels of the metrics that the Pentagon would assign to the overall importance of each operation nor to the importance assigned to preserving the lives of noncombatants within firing range of U.S. drones and missiles.
- The editor was invited to make a 20-25 minute presentation at the headquarters of the National Security Administration (NSA) in the mid-1980s. It was well received. When he finished, the chair of the session panel thanked him profusely and announced that the panel would spend the next 30 minutes discussing how the editor's findings and recommendations could be used by the NSA.
As the editor returned to his seat, the chair gently tugged him to the exit. "Why can't I stay to listen to your discussion? I have Top Secret clearance?" ... to which the chair smiled, "Unfortunately, that's nowhere near high enough for you to hear what we have to say about what you just said".
So what notions did Dr. Amodei convey to the Pentagon's CTO directly or indirectly that led the CTO to think that Anthropic had might be able to activate guardrails on an agent? More importantly, why did he want to convey these notions when he did?
Kidnapping Venezuela's President Maduro
Here's a quote from an article in Semafor, 2/17/26, boldface added by the editor.
- "Soon after the Maduro raid, during a regular check-in that Palantir holds with Anthropic, an Anthropic official discussed the operation with a Palantir senior executive, who gathered from the exchange that the AI startup disapproved of its technology being used for that purpose.
The Palantir executive was alarmed by the implication of Anthropic’s inquiry that the company might resist the use of its technology in a US military operation, and reported the conversation back to the Pentagon, a senior Defense Department official said."
The Maduro kidnapping was primarily a CIA operation, as per this report in CNN, 1/4/26
- "In August, the CIA covertly installed a small team inside Venezuela to track Maduro’s patterns, locations and movements, which helped bolster Saturday’s operation as to his exact whereabouts, including where he would be sleeping, sources familiar with the plans told CNN."
Why was Anthropic so disturbed by this operation? It had only specified two restrictions on its software in its contracts: no mass surveillance and no autonomous agents. The CIA's operation involved neither concern, but it involved killing people, the same prohibition that Anthropic had included in its contract with the Pentagon that prohibited the Pentagon from using Anthropic's agents autonomously. About 75 Venezuelans were killed during the capture, according to the Washington Post, 1/6/26
- "Maduro raid killed about 75 in Venezuela, U.S. officials assess. The sizable death toll adds meaning to President Donald Trump’s public remarks that the operation he approved was “effective” but “very violent.”"
At this point, Anthropic realized that its efforts to avoid involvement in killing operations was futile. It wanted to leave. Unfortunately, a contractor's relationship with the federal government is highly asymmetric. The government can cancel contracts at any time, but contractors are held to the terms of their contracts, as per this GAO report.
- "Federal agencies spend hundreds of billions of dollars on contracts each year to buy a range of goods and services needed to meet missions. This includes everything from office supplies to weapon systems. But agencies have flexibility to terminate a contract before it is completed—for example, when spending priorities or needs change or when the contractor fails to perform. "
In other word, Dr. Amodei had volunteered his company's services to the federal government. These services were not commandeered, e.g., via the Defense Production Act. But his company could only stop volunteering if it was fired. That's why Dr. Amodei had to say something that would trigger immediate dismissal, which he did. So what did he say?
Risky Supplier
Before describing what Dr. Amodei might have said or strongly implied, we note that President Trump's reaction was swift and damning:
Before describing what Dr. Amodei might have said or strongly implied, we note that President Trump's reaction was swift and damning:
- "President Trump ordered the U.S. government to stop using the artificial intelligence company Anthropic's products and the Pentagon moved to designate the company a national security risk" from ''OpenAI announces Pentagon deal after Trump bans Anthropic", NPR, 2/28/26
Anthropic immediately sought court action to block the president from applying the security risk supplier label.
Ramasamy's Folly
According to a Yahoo!News report, 3/20/26, Thiyagu Ramasamy, the company’s Head of Public Sector, was one of two people who filed testimony to the judge.
- "Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he’s credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer."
"His declaration takes on the government’s claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn’t technically possible. Per his telling, once Claude is deployed inside a government-secured, “air-gapped” system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of “operational veto” is a fiction, he suggests, explaining that a change to the model would require the Pentagon’s explicit approval and action to install.
Oh my, Mr. Ramasamy, have we got bad news for you ... 😱
A tiny kill switch is definitely a technical possibility, one that is small enough to be undetected by regression tests as part of an otherwise legitimate update that would be approved by the government.
The editor of this blog is embarrassed to admit that it took him two weeks to figure it out ... until he remembered "The Magical Number Seven, Plus or Minus Two" report cited in the Scenario section of this note, a finding whose maximum was subsequently reduced by Cowan from a maximum of nine down to a maximum of four. Here is the essence of one kind of kill switch:
- The agent software notes how long it takes for a handler to press the "Go" switch. If the time required is "too short", the agent shuts down because a short approval, a small latency, would only occur if the handler had not examined the agent's recommended actions.The handler was merely rubber stamping the agent's recommendation. In other words, the handler was granting de facto autonomy to the agent.
Of course the agent might not shut down. It would be a tad bit more clever if the agent merely cleared its screen, then returned its previous recommendation, unchanged, to the handler under the "benign" assumption that the handler had pressed the "Go" switch by accident.
How many times would this have to occur before the handler gave up and resorted to previous methods of selecting targets and responses ... thereby absolving Anthropic's agent software from all responsibility for whatever happened thereafter?
Mr. Ramasamy, please don't quibble about details. The above description merely demonstrated that a kill switch was technically feasible. Its actual implementation would require far more nuance. For example, a large latency might signify the death of the handlers and all of the other members of their units because they didn't respond fast enough to the enemy's attack. Bottom line: Your colleague achieved his primary objective. He threatened the Pentagon with a kill switch, but they called it a reactivated guardrail. No matter.
At the risk of boring you with repetition, the crucial point was specified in the Scenario section of this note. Here it is again:
- This point is crucial. If Dr. Amodei's expectation required humans to process more than 4 factors at a time in real time, then he was requiring human handlers to exceed human capabilities. This was an impossible expectation, so his so-called contract specification was a fraud. The contract was invalid. Had the Pentagon known that this was his expectation, the Pentagon would not have signed the contract.
Dr. Amodei seems to think that this situation is akin to a worker who has a colleague who is a much better writer. The colleague can write a page of the clearest prose in less than an hour, whereas the worker might require a day or two to produce something that isn't as well written. But the worker is be able to read and understand the colleague's prose in no more than a few minutes.
So too, an agent can identify the many factors that best identify enemy targets plus non-combatants in an area and how best to respond to the enemy's threats. Identifying the factors is the hard part. Surely the human handler will be able to understand the agent's recommended best response within a few moments, right? Absolutely wrong if the human handler must understand a recommendation in real time that involves more than 4 factors. Dr. Amodei is either ignorant of Miller's historic finding and Cowan's lower, more accurate assessment .. or he has no respect for psychological research.
And the Winner of this year's Oscar for best actor goes to "Mad Dog” Dario
Not since Hannibal lector terrified us with his menacing reptilian hisses has any actor raised so many hairs on the backs of so many important necks. "Whoa ... Holy shit! ... Whoa ... Holy shit!" they cried as they ran out of the Pentagon up the road and over the low bridge into Washington, D.C. "Whoa ... Holy shit ... he's got scary powers ... we think ... maybe ... We're not sure, but we can't take any chances"
They ran all the way to the White House, all the way to the Oval Office. "Mr. President, Mr. President", they panted ... "Mad Dog has demonic powers ... we think ... we're not sure ... you have to cancel his contract immediately, otherwise he will use his demonic powers to mess up everything ... myabe ... Whoa ... Holy shit!"
The President rose from the Resolute Desk and calmly walked over to the dart board in the corner of the Oval Office. The board had recently been covered with a photo of Mad Dog, actually a print of a screen shot of Mad Dog's bumbling televised interview on CNBC. The president glared at the print, pointed his finger and considered words, famous words, he had not uttered in more than a decade, good times back then, good times, he mused. Then he bellowed his Jovian damnation ➡ YOU'RE FIRED!!!
They ran all the way to the White House, all the way to the Oval Office. "Mr. President, Mr. President", they panted ... "Mad Dog has demonic powers ... we think ... we're not sure ... you have to cancel his contract immediately, otherwise he will use his demonic powers to mess up everything ... myabe ... Whoa ... Holy shit!"
The President rose from the Resolute Desk and calmly walked over to the dart board in the corner of the Oval Office. The board had recently been covered with a photo of Mad Dog, actually a print of a screen shot of Mad Dog's bumbling televised interview on CNBC. The president glared at the print, pointed his finger and considered words, famous words, he had not uttered in more than a decade, good times back then, good times, he mused. Then he bellowed his Jovian damnation ➡ YOU'RE FIRED!!!
So, Dr. Amodei, you achieved your objective. Your contract was cancelled. But there may be a few Pyrrhic price tags attached to this victory.
- Supreme Court
As reported by CNBC, 3/26/26, the judge has issued an injunction that blocks Trump's "risky supplier" label. but should the. president appeal her decision to the Supreme Court, the high may reverse her ruling, given that there is reason to believe that you deliberately baited the Pentagon into thinking that you might have a way to impede their use of agents during battle. - Vanishing contract possibilities
No matter whether Trump appeals the judges's injunction or not, most program managers in most federal agencies will be highly reluctant to award any contracts to Anthropic now that the president has denounced Anthropic's reliability - Lame duck = sitting duck
According to MIT Tech Review, 3/2/26, your company has agreed to a six month extension of your contract in order to facilitate a smooth transition to OpenAI. Meanwhile, if anything goes wrong during any of the president's "interventions", who do you think he is going to blame, as loudly and as often as possible? - Russian and North Korean "sovereign" hacker gangs
The impressive growth of the sales of your services to large corporations will help them use the free, open source MCP to develop autonomous agents that could substantially increase their profits by replacing substantial numbers of their white collar employees.
Therefore your sales have have probably attracted the attention of the battalions of hacker gangs deployed by Russia and North Korea to find flaws in the MCP and exploit these flaws. Even the best complex software has flaws, so these super hackers will find and exploit the MCP's flaws to substantially reduce the effectiveness of your agents. Why will they do this? Because Russia and North Korea will do anything that will disrupt substantial segments of our economy.
Your front page squabbles with the Trump administration over its use of your agents in battlefield situations has probably moved your models to the top of the sovereign hackers' target lists. - From savvy computer users to hacker groups of "Savvy Luddites"
You generously offered the MCP to the world on the Internet as free open source software. The Russian and North Korean hacker gangs might return the courtesy by posting their hacks as free open source apps on the Dark Web. This would make the sovereign hacker apps accessible to disgruntled white collar workers who lacked the advanced tech skills required to discover the flaws in the MCP themselves. But disgruntled employees who were computer savvy could use the sovereign apps to impede their employer's intentions to replace them with your autonomous agents.

No comments:
Post a Comment
Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.