Dr. Amodei is also a brilliant man who was educated at three of the best universities on the planet: Caltech, Stanford, and Princeton. So what accounts for his glaring blind spots?
(a) Bezos, the benign patron, bought the Washington Post in 2013
Bezos saved the Washington Post from bankruptcy. The Post had not yet found the survival strategy that the New York Times, the Wall Street Journal, and other major papers worked out against the internet threat. Bezos made substantial investments and gave the editorial staff substantial autonomy -- he behaved as a benign patron.
- "Washington Post closes sale to Amazon founder Jeff Bezos", Washington Post, 10/1/2019
Among many other things, he blocked the Washington Post's endorsement of Kamala Harris in late October 2024; he pledged $1 million to the inauguration committee; and he paid $40 million for the rights to Melania Trump's documentary.
- "Amazon Has Found the Easiest Way to Influence the Trump Administration", New Republic, 1/7/25
- "Jeff Bezos’ Washington Post Cuts 30% of Staff, Laying Off More Than 300 Employees Including Amazon Beat Reporter", Variety, 2/4/26
(c) Amazon always focused on genAI models as revenue generators for Amazon Web Services (AWS)
The editor of this blog is unaware of Amazon ever making "visionary" promises to develop artificial general intelligence, super intelligence, or any other kind of artificial intelligence entity based on large language models. Amazon pioneered cloud services in the early 2000s when Bezos was CEO, and it has maintained this focus under Andy Jassy, its current CEO.
Indeed, Amazon quickly developed a new component of AWS called "Bedrock" in April 2023 (GeekWire, 3/22/26) as a platform where developers could access and build with a wide range of models. Here's a copy of Amazon's current description:
- "Amazon Bedrock powers generative AI for more than 100,000 organizations worldwide—from startups to global enterprises across every industry. It provides the proven infrastructure and comprehensive capabilities to confidently build applications and agents that work in production with the flexibility, enterprise security, and proven scalability you need to innovate boldly and deliver AI that drives real business impact."
Amazon's SEC report for fiscal 2025 includes two retail components -- North America and International -- plus AWS. Here's a direct quote from the report:
- "Operating income increased to $80.0 billion in 2025, compared with $68.6 billion in 2024..."
-- North America segment operating income was $29.6 billion, compared with operating income of $25.0 billion in 2024
-- International segment operating income was $4.7 billion, compared with an operating income of $3.8 billion in 2024
-- AWS segment operating income was $45.6 billion, compared with operating income of $39.8 billion in 2024".
- Net sales increased 12% to $716.9 billion in 2025, compared with $638.0 billion in 2024. Excluding the $4.4 billion favorable impact from year-over-year changes in foreign exchange rates throughout the year, net sales increased 12% compared with 2024.
-- North America segment sales increased 10% year-over-year to $426.3 billion.
-- International segment sales increased 13% year-over-year to $161.9 billion, or increased 10% excluding changes in foreign exchange rates.
-- AWS segment sales increased 20% year-over-year to $128.7 billion
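The segment figures quoted above can be cross-checked with simple arithmetic. The sketch below (the editor's sanity check, not part of Amazon's filing) confirms that the segment sales sum exactly to the reported $716.9 billion total, while the segment operating incomes sum to within a rounding error of the reported $80.0 billion, and it highlights why AWS matters so much to Amazon's bottom line:

```python
# Sanity-check the segment figures quoted from Amazon's results release.
# All numbers are in billions of USD, copied from the quotes above.

segments_2025 = {
    "North America": {"op_income": 29.6, "sales": 426.3},
    "International": {"op_income": 4.7,  "sales": 161.9},
    "AWS":           {"op_income": 45.6, "sales": 128.7},
}

total_op_income = sum(s["op_income"] for s in segments_2025.values())
total_sales = sum(s["sales"] for s in segments_2025.values())

# Segment sales sum exactly to the reported $716.9B total.
assert abs(total_sales - 716.9) < 0.05

# Segment operating incomes sum to $79.9B vs. the reported $80.0B --
# a $0.1B gap attributable to rounding each segment to one decimal place.
assert abs(total_op_income - 80.0) < 0.15

# AWS share of operating income vs. its share of sales: roughly 57% of
# the operating income on roughly 18% of the net sales.
aws_income_share = segments_2025["AWS"]["op_income"] / total_op_income
aws_sales_share = segments_2025["AWS"]["sales"] / total_sales
print(f"AWS income share: {aws_income_share:.0%}, sales share: {aws_sales_share:.0%}")
```

In other words, AWS generates more operating income than both retail segments combined, which is why Amazon treats genAI models as revenue generators for AWS rather than as ends in themselves.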
If this pessimistic assessment is validated, will Amazon be prepared to compensate the staff it replaces with robots through paid retraining or financial compensation? Probably; that's the logical, pragmatic response, but only if all other employers who replace staff with robots are also required by law to do so.
(d) Amazon's Plans
-- "Now, interviews and a cache of internal strategy documents viewed by The New York Times reveal that Amazon executives believe the company is on the cusp of its next big workplace shift: replacing more than half a million jobs with robots"
-- "Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire."
-- "At facilities designed for superfast deliveries, Amazon is trying to create warehouses that employ few humans at all. And documents show that Amazon’s robotics team has an ultimate goal to automate 75 percent of its operations."
-- "Amazon is so convinced this automated future is around the corner that it has started developing plans to mitigate the fallout in communities that may lose jobs. Documents show the company has considered building an image as a “good corporate citizen” through greater participation in community events such as parades and Toys for Tots."
-- "The documents contemplate avoiding using terms like “automation” and “A.I.” when discussing robotics, and instead use terms like “advanced technology” or replace the word “robot” with “cobot,” which implies collaboration with humans."
-- "Amazon’s plans could have profound impact on blue-collar jobs throughout the country and serve as a model for other companies like Walmart, the nation’s largest private employer, and UPS."
-- “Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.”
-- "There are concerns automation could affect people of color particularly hard because Amazon’s warehouse workers are about three times as likely as a typical American worker to be Black."
The editor of this blog drafted an earlier version of this discussion in an effort to understand and explain the underlying causes of Anthropic's failed partnership with the Pentagon. A highly condensed description of that failure can be found in the Pentagon Appendix to this report. His original data-driven report flatly contradicted the unfounded emotional "support" that Dr. Amodei still receives from the vast majority of his colleagues in Silicon Valley.
At a press briefing in 2002, Donald Rumsfeld, Secretary of Defense, came up with a clever formulation.
- Known knowns — There are important things that we need to know, and we know them. Rumsfeld called these the “known knowns”. For example, students will not be admitted to a course unless they can demonstrate that they have already acquired a certain body of knowledge, the course prerequisites.
- Known unknowns — There are important things that we know that we should know, but we don’t know them yet. Rumsfeld called these things “known unknowns”. For example, instructors usually provide a syllabus that specifies the knowledge students do not know at the beginning of a course but should know by the end of their course. However, students who do not acquire an understanding of most of this specified knowledge by the end of the course will fail the course because they did not convert their known unknowns into known knowns.
- Unknown unknowns — Rumsfeld identified a third category of knowledge, important things that we should know but we don’t even know the names of such knowledge yet. This is the stuff of what some people call “black swans”.
Most freshmen don't know much about anything; it's all a big bag of unknown unknowns. So who decides what should be taught? How should this knowledge be divided into courses? And when should those courses be taught? These decisions are made by the faculty, of course.
But colleges and universities long ago learned not to take the aspirations of entering freshmen too seriously. Young students often change their minds. Indeed, changing one's mind after making wrong choices is one of the golden prerogatives of youth that should not be denied.
Therefore the curriculum should enable the youngest students to change their aspirations without incurring substantial penalties, e.g., added tuition or extra time to graduate. Faculty strive to identify a broad range of knowledge that every college educated graduate should acquire. Courses that satisfy these general requirements are usually offered to freshmen and sophomores. Most of the courses in selected majors are offered to juniors and seniors.
But the process is more complex, surprisingly so. The youngest students not only learn from the faculty; they learn from each other and from older students, especially from juniors and seniors in majors other than those they had been considering.
Personal anecdote: When the editor was a sophomore engineer, he became friends with a junior, a sociology major who already seemed to know more sociology than the professors, but who also knew less than 5 percent as much math as he did. He teased his friend to tell him about some great discovery made by sociologists using kiddy math. His friend's response: read Émile Durkheim's late-19th-century classic study "Suicide".
Suicide is perhaps the most personal decision a person could make, so one would expect that rates of suicide only varied by differences in people's personal psychologies. But using grade school arithmetic, Durkheim found that the rates of four different kinds of suicide also varied by social conditions, e.g., marital status (unmarried), oppressive conditions (prisons), social disruption (extreme gain or loss of wealth), and altruism (military suicide in defense of others).
About fifteen years after reading Durkheim, the editor submitted his PhD dissertation, a sociometric examination of the communication networks among planners and engineers. Most of his references were to previous studies in social psychology.
We begin with a quick review of Dr. Amodei's stellar academic background. We note his enrollment at the highly selective California Institute of Technology ("Caltech") in 2001 and his one year pause in his undergraduate studies before he transferred to Stanford University in the fall of 2004. Then we offer a careful analysis of a compelling anti-war op-ed that he published a few months before he left Caltech.
Dario Amodei was admitted in the fall of 2001. Readers should understand that his admission was a personal milestone because Caltech is arguably the nation's most selective top-tier institution of higher education. Its current total enrollment of 900 students is less than half the size of Harvard's freshman class. The math SAT scores of 50 percent of its freshmen range between 780 and 799. Twenty-five percent score a perfect 800.
According to Clay, 9/13/24, Amodei left Caltech at the end of his sophomore year to take a position at Applied Minds as a research intern (July 2003 to September 2003); then he worked at Schlumberger as a geophysicist (February 2004 to September 2004). He entered Stanford in the fall 2004 semester and graduated in 2006 with a BS in Physics.
Anti-war OpEd (pdf)
Readers should click the above link to verify every assertion in this section. Having read many of Dr. Amodei's convoluted essays on Anthropic's website, the editor of this blog was shocked to find that 19-year-old Dario wrote this fiery call to activism with such stunning clarity.
Dario had challenged his intended readers, his fellow students, to recognize their responsibility as the nation's future leaders in science and technology:
- "All the great idols of Caltech, from Richard Feynman to Linus Pauling, have understood the need to be citizen-scientists, to contribute their analytical skills to the enormous forum that is our democracy. We who seek to emulate them scientifically should also do so politically. We should never let ourselves be reduced to amoral technicians who run the machines of war as casually as we do our computations."
Now comes the analysis. Back then, the editor of this blog was about twenty years older than Dr. Amodei is today, so his perspective was uncertain. Attacking Hussein in response to bin Laden's attacks on the World Trade Center and the Pentagon made no sense. And it was widely known that President Bush had been holding a personal grudge against Hussein ever since Hussein organized an attempted assassination of his father, President George H. W. Bush, during a visit to Kuwait in 1993 after the elder Bush had left office.
Nevertheless, Secretary of State Colin Powell's enthusiastic support for this response in his speech before the United Nations caused the editor to nod in silent agreement, just as it commanded similar silent agreement from millions of other Americans.
President George W. Bush was counting on this reaction. That's why he and his colleagues concocted a complex set of bald-faced lies that convinced his unsuspecting Secretary of State that Hussein had a nuclear arsenal, which Hussein definitely did not have. Many months after our successful "conquest" and our continued failure to find these nuclear weapons of mass destruction (WMDs), the truth came out. Powell was humiliated into retirement as millions of hitherto silent Americans roared their anger at this outrageous presidential deception.
Hussein did have WMDs, but they weren't nuclear. He had been slaughtering thousands of Iranians and Kurds for years with chemical and biological WMDs, using equipment that he had purchased from western companies. According to Wikipedia, "On September 22, 1980, Iraq staged an all-out war on Iran from ground, air, and sea and came to occupy a vast part of Iranian territory .... February 1984 to the end of the war [1991], chemical weapons were used extensively. All told, 52% of Iraq's international chemical weapon equipment was of German origin."
Hussein had pretended to have nuclear WMDs in order to deter Iranian hordes from swarming over the border in retaliation.
The CIA was in on the deception of Secretary Powell. Indeed, the CIA Director provided the most convincing confirmation of the president's lies. Of course the CIA knew all about Hussein's chemical WMDs and their western manufacturers. Nevertheless, young Dario naively included the following CIA denial in his op-ed: "A CIA report commented that Saddam was unlikely to use weapons of mass destruction or seek collusion with terrorist groups unless he was attacked." ... Rubbish.
So where was young Dario when the lies the Bush administration had told to Secretary Powell were publicized? He was at Stanford. There is no evidence that he joined the now loudly protesting angry millions all over the U.S. who would ultimately declare these "Forever Wars" to be wasteful tragedies. Young Dario had evidently become what he had condemned a few years earlier, a silent nerd, focused on his studies.
Had young Dario entered Stanford as a freshman, he would have encountered many students who were as outraged as he was by the forthcoming assault on Iraq, including some super smart aspiring young scientists like himself. Instead he dropped in as a junior and missed the frequent interactions freshmen and sophomores have in their general courses with students holding a wide range of aspirations.
So what did he miss? Had he started at Stanford instead of Caltech, he would have been on Stanford's campus when large scale anti-war demonstrations erupted. According to Stanford Magazine,
- "On March 5, more than 500 students and faculty converged on the Quad to protest and to attend teach-ins offered by 20 professors from 10 departments. Biologist Robert Sapolsky spoke about “Evolution of Aggression and Warfare,” anthropologist Carol Delaney recalled a trip to Iraq, and political scientist Terry Karl, ’70, MA ’76, PhD ’82, addressed the links between “Oil and the War in Iraq.”"
With a bit of luck, he might have encountered some super smart juniors and seniors who would have blunted his snobbish belief that the only real sciences are the physical sciences -- physics, chemistry, biology, and combinations thereof. For example, he might have learned that psychologists have discovered surprising limitations on human capacity to deal with complexity in real-time. Aware of these scientific findings, he would not have specified impossible conditions on the use of his models in the Pentagon's battlefield operations. Better still, he would have volunteered his company's models to the State Department, instead of the Pentagon.
Instead, young Dario has grown up to become the Dr. Amodei we see today, the ultra idealist who sincerely believes that he carries a lonely burden, that anyone who disagrees with him must not be telling the truth.
Unfortunately, this is a two way street. Colleges and universities have irrefutable evidence that most students who use genAI chatbots earn lower grades because they don't learn as much as they should, i.e., they don't convert enough known unknowns into known knowns by their courses' ends. More generally, chatbots impair their student users’ capacities to engage in critical thinking.
Worse still, their faculty know that large language models are not based on any underlying cognitive science, no theory of intelligence that can explain high intelligence in tiny-brained crows and also account for the different brain-to-body mass ratios in creatures as varied as crows, elephants, dolphins, humans, and killer whales.
Perhaps faculty aversion to genAI models was also intensified by DARPA's unfortunately premature negative assessment of this technology in this YouTube video nine years ago, wherein its director flatly asserted that LLMs have virtually no capability for logical/deductive reasoning. Fortunately, DARPA has since reversed this assessment and launched impressive projects that fully exploit the logical/deductive capabilities of LLMs. Two of these projects will be noted in later sections of these notes.
Reactions vary, but some colleges and universities have been as zealous and as counterproductive as Dr. Amodei's ultra idealism. Their faculty are deeply disturbed by the tangible harm these baffling collections of non-scientific software are doing to their students, so their reactions have been almost medieval.
- "Introducing the Model Context Protocol", Anthropic 11/25/24
- "Why this leading AI CEO is warning the tech could cause mass unemployment", CNN, 5/29/25
- "Anthropic’s role will be to provide as much evidence as possible that AI safety techniques cannot prevent serious or catastrophic safety risks from advanced AI, and to sound the alarm so that the world’s institutions can channel collective effort towards preventing the development of dangerous AIs. If we’re in a “near-pessimistic” scenario, this could instead involve channeling our collective efforts towards AI safety research and halting AI progress in the meantime. Indications that we are in a pessimistic or near-pessimistic scenario may be sudden and hard to spot. We should therefore always act under the assumption that we still may be in such a scenario unless we have sufficient evidence that we are not"
Deleting its obsolete promises from its manifesto would not be enough. Anthropic should have invested its considerable skills in proactive partnerships with universities and other training operations that might identify the new skills likely to be required for new job opportunities.
- For example, as reported in TechCrunch, M.I.T. and some other universities have responded to dramatic drops in enrollments in their computer science programs by offering joint degrees with other majors wherein genAI skills could be applied.
-- "The great computer science exodus (and where students are going instead)", TechCrunch, 2/15/26
- Today's large language models (LLMs) produce good summaries. They also have good deductive skills, so they can help researchers suss out defects in the logic of their hypotheses; but current models are lousy at inference, even Claude.
- On the other hand, data scientists providing support for researchers in just about every field have found that 70 to 80 percent of their effort is devoted to cleaning up the data. This is tedious, time-consuming work that seldom requires ingenuity.
- Only the most talented and highly skilled students will obtain joint degrees that empower them to create customized autonomous agents that perform all of the tedious data preparation, thereby giving the students, now researchers, more time to focus on the concepts they are trying to creatively elucidate. In other words, researchers will do what agents can't; agents will do what would otherwise waste the researchers' time.
- But colleges and universities that offer data science degrees will have an easier path. Their data science majors could be required to take courses that teach them how to use the MCP to develop autonomous agents.
Students obtaining degrees in other fields could be given additional statistics courses that introduce them to the data-cleaning process and alert them to the advantages of working with an appropriately trained data scientist who could develop the kinds of autonomous agents that would be most useful to them.
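The "tedious, time-consuming" cleanup described above is mostly mechanical: trimming stray whitespace, dropping rows with missing fields, and removing duplicates. A minimal pure-Python sketch of the kind of routine an agent could take over; the records and field names are hypothetical, and a real pipeline would use a library such as pandas:

```python
# A toy version of routine data cleaning: the mechanical 70-80% of the
# work that the text suggests agents could take off researchers' hands.
# The records and field names below are hypothetical.

raw_records = [
    {"name": " Alice ", "age": "34", "city": "Boston"},
    {"name": "Bob",     "age": "",   "city": "Chicago"},   # missing age
    {"name": " Alice ", "age": "34", "city": "Boston"},    # duplicate
    {"name": "Carol",   "age": "29", "city": " Denver "},
]

def clean(records):
    """Strip whitespace, drop rows with empty fields, remove duplicates."""
    seen = set()
    cleaned = []
    for rec in records:
        rec = {k: v.strip() for k, v in rec.items()}   # trim whitespace
        if any(v == "" for v in rec.values()):         # drop incomplete rows
            continue
        key = tuple(sorted(rec.items()))               # dedupe on full row
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

result = clean(raw_records)
print(result)  # two rows survive: Alice and Carol
```

Each step is trivial on its own; the burden comes from repeating such steps over thousands of columns and millions of rows, which is exactly where automation pays off.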
But young techs are at most 10 percent of the lowest level white collar staff employed by all large corporations. If only 1,000 large corporations each replaced 1,000 low level white collar staff, one million jobs would be lost. Why would Anthropic want to use its AI technology for such harmful purposes?
Members will be given extensive access to Anthropic's models via APIs on Amazon's AWS, Microsoft's Azure, and Google's cloud. In other words, Anthropic should anticipate heavy usage and substantial income from these activities. And this raises questions:
- China's DeepSeek "distilled" OpenAI's GPT models via extensive access. Given the leaked code plus extensive access, all of the wealthy members of the consortium should be able to distill a model with effectiveness comparable to Mythos.
- The editor of this blog is far more concerned by the possibility that sovereign hacker groups from China, North Korea, and Russia will gain extensive access to Mythos through bribery or blackmail of personnel on the staff of one of the consortium's 50 members. Then our adversaries will distill models that are comparable to Anthropic's most powerful model ... models that can hack large enterprise infrastructure ...
... But Mythos can also be useful across a wide range of military and intelligence applications, like the applications discussed in previous sections of this report: the CIA's kidnapping of President Maduro and the Pentagon's agentic targeting of adversary units on battlefields -- applications which Anthropic renounced ... for America ... But when our adversaries distill models comparable to Mythos, will Anthropic once again deny all responsibility .... and will Silicon Valley loudly cheer ... or is anything really a secret when you share it with 50 organizations?
Indeed, no one needs access to Anthropic's most powerful model to discover flaws in infrastructure. That challenge was met by the winners of a DARPA/ARPA-H Challenge in the summer of 2025, as noted in the following bullet.
- "Challenge showcases AI’s power to secure America’s health care", ARPA-H, 9/4/25 ... "At DEF CON 33, ARPA-H joined the Defense Advanced Research Projects Agency (DARPA) to announce the winners of the AI Cyber Challenge (AIxCC), a two-year competition to develop AI-enabled software that automatically identifies and patches vulnerabilities in the source code that underpins critical infrastructure." ... Hmmm ... sounds like Anthropic's new Mythos
-- The first academic assessment of the winning software was recently posted on ArXiv, February, 2026
-- The winning software can be found on GitHub (It's open source)
-- Anthropic was one of the corporate sponsors of this challenge, so its representatives were aware of the winners' impressive achievements.
Based on the findings in this report, the editor of this blog offers a few recommendations to the California Institute of Technology (Caltech), to Anthropic, and to Anthropic's vociferous supporters in Silicon Valley, all of which carry the same message: broaden your fields of vision.
-- First, the editor appeals to DARPA:
Nevertheless, your public denunciation of the logical/deductive powers of large language models in a YouTube video nine years ago broke your usual silence about failed technologies. Indeed, the rapid evolution of LLMs since then has changed your mind, so much so that you have already achieved considerable success in a few projects that utilize these capabilities.
Accordingly, the editor of this blog strongly recommends that you issue a public revision of your assessment of LLMs that will assure America's science and tech communities that it is well worth their while to reconsider these capabilities and to pay close attention to your own impressive use of LLMs.
This project promises to become one of DARPA's most important projects ever. It implements notions of software as provable theorems that were first introduced by Edsger W. Dijkstra, Tony Hoare, and Haskell Curry & William Alvin Howard, but that were previously implemented only within the cyber-security community because of sky-high implementation costs. For cyber clients -- e.g., clients who would otherwise incur massive financial losses if their security failed -- the costs of failed security were high enough that the higher development costs were cost-effective.
When DARPA releases the free open source technologies that emerge from CLARA, the costs of implementing guaranteed cyber safety should be drastically reduced for all clients. In other words, Mythos today, but CLARA not too long thereafter.
In more detail, CLARA is anticipated to create powerful methods for the hierarchical, fine-grained, highly transparent composition of important kinds of ML and AR components, including Bayesian, neural nets, and logic programs.
CLARA aims to create a theory-driven algorithmic, highly reusable, scalable foundation for high assurance plus broad applicability, useful for many crucial defense and commercial realms which may include, but is not limited to:
- Kill web, supply chain & logistics, and wargaming
- Autonomous and command & control
- Medical, financial, and legal
- Science and tech design
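The "software as provable theorems" idea traces to the Curry-Howard correspondence: a well-typed program is a proof of the proposition its type expresses. A minimal illustration in Lean 4 (an illustration of the general idea only, not actual CLARA code):

```lean
-- Curry-Howard in miniature: the type A ∧ B → B ∧ A is a proposition,
-- and any program of that type is a proof of it.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩

-- The same structure as an ordinary program over pairs: swapping the
-- components of a pair mirrors the proof of commutativity above.
def pair_swap {α β : Type} : α × β → β × α :=
  fun p => (p.2, p.1)
```

When this correspondence is applied at scale, a program that type-checks carries a machine-verified guarantee about its behavior, which is precisely what "guaranteed cyber safety" requires.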
Please broaden your curriculum for freshmen and sophomores asap so that your students not only learn how to produce tools based on physics, chemistry, and biology, but also learn the fundamental capabilities and limitations of the humans who will use these tools, capabilities and limitations identified by the human sciences -- sociology, anthropology, psychology, economics, etc.
Stop converting the most brilliant minds of our youngest generations into cyclopes, one-eyed monsters whose tunnel vision makes them incapable of perceiving the complex, three-dimensional relationships between the tools they develop and the human users of these tools.
Readers should recall that young Dario's fiery Anti-war OpEd (pdf) back in the spring 2002 semester had challenged his intended readers, his fellow students at Caltech, to recognize their responsibility as the nation's future leaders in science and technology, as followers in the footsteps of "citizen-scientists" Linus Pauling and Richard Feynman.
But let's skip to Dr. Feynman's most famous act of public service: his discovery of what really caused the space shuttle Challenger to break up. The editor asked Claude, Anthropic's chatbot, for a description. Here are the most relevant parts of Claude's summary:
"The Space Shuttle Challenger broke apart 73 seconds after launch on January 28, 1986, killing all seven crew members. President Reagan appointed the Rogers Commission to investigate, and Richard Feynman was one of the commissioners.
Feynman quickly became frustrated with the commission’s pace and its deference to NASA. He conducted his own investigation, talking directly to engineers at Morton Thiokol, the contractor that built the solid rocket boosters, and to lower-level NASA engineers.
On a televised hearing he demonstrated in ten seconds, with a glass of ice water, what NASA had failed to acknowledge: the O-rings couldn’t seal properly in cold temperatures. That’s what killed the crew.
But his deeper contribution was documenting a systematic disconnect between NASA’s engineers and its management. The engineers estimated the probability of shuttle failure at roughly 1 in 100. Management claimed it was 1 in 100,000 — a thousand times safer than the engineers believed.
The engineers who knew the shuttle was dangerous were not being heard by the managers who decided whether to launch. Feynman concluded: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.“
The contract stipulated that Anthropic's autonomous agent software could not be used autonomously. The software could only be used to recommend drone and missile launches against an adversary. The final decision to launch the drones/missiles had to be made by a human handler.
Dr. Amodei insisted with cyclopean certainty that there was literally only one possible interpretation of this restriction: the handler had to read and understand the agent's recommendation, which was, like all precise machine-learning patterns, based on many factors.
"Pauling joined Einstein’s Emergency Committee of Atomic Scientists in 1946 — the year after Hiroshima. That’s when his activism began. Throughout the 1950s he marshaled evidence, denounced nuclear testing, publicly debated Edward Teller, circulated petitions, led demonstrations, gave innumerable speeches, and wrote a book. The US State Department revoked his passport in 1952 for his activism. He was called a communist sympathizer. Life magazine called his Nobel Peace Prize “A Weird Insult from Norway.” He was effectively forced out of Caltech.
In 1957-1958 he and his wife collected 11,021 signatures from scientists in 49 countries and presented the petition to the United Nations.
The Partial Nuclear Test Ban Treaty came into force on October 10, 1963 — the same day Pauling was awarded the Nobel Peace Prize.
Dr. Amodei's activism did not reemerge until 2024, when he volunteered Anthropic's technical support to federal agencies during the Biden administration. Compare Pauling's seventeen years of activism with Dr. Amodei's one semester (2002) plus about 14 months (November 2024 to March 2025).
- Secretary Hegseth's marching orders
Secretary Hegseth would not have inserted the agents into live battlefield operations unless they were judged to be highly successful during the extensive simulation tests.
Nor would he have taken the human operators out of the loop.
He would probably have given orders to the battlefield handlers to review the agents' recommendations and to reject them if they determined that the recommendations were incorrect.
-- For example, human handlers would check the viability of the agent's recommendations. Handlers might notice that the agents recommended using more drones than the handlers' units currently possessed, or that the agent did not include the handlers' recent estimates of the number of noncombatants within firing range. Such mishaps might occur because the handlers had not updated these parameters in the data fed to the agents. The handlers would resubmit the corrected parameters so the agents could revise their recommendations.
- Murphy’s Law
The Pentagon's operators of the drones and missiles were not just there to approve the agent's recommendations. They were also there as human backups in case the agents went offline due to hardware failures, network failures, or other mishaps. The Pentagon always has backups for its operations; it even has backups for its backups for its most critical ops.
If the data was still available, the handlers would focus on the four factors they understood and make their own recommendations. Their recommendations would not be as effective as the agents', but doing nothing is always an option in real-time operations, an option that can lead to disastrous results.
- More decision factors, less human understanding
It was highly likely that Anthropic's autonomous agents would be far more effective than the Pentagon's current human-controlled tech because the agents considered far more factors in their decisions.
But the maximum number of factors that humans can consider at the same time was estimated to be no more than 9, as reported in "The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information", George A. Miller, Classics in the History of Psychology, 1956.
More recently, that rough estimated maximum was reduced from 9 to 4 factors, as reported in "The magical number 4 in short-term memory: a reconsideration of mental storage capacity", N. Cowan, PubMed/NIH, February 2001.
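One way to see why four factors is a practical ceiling: the number of pairwise interactions a handler must weigh grows quadratically with the factor count. The combinatorial sketch below is the editor's illustration of that burden, not a calculation drawn from Miller's or Cowan's papers:

```python
import math

# Pairwise interactions among k factors: C(k, 2) = k*(k-1)/2.
# With 4 factors a handler weighs 6 interactions; with 9, 36 --
# and an agent weighing dozens of factors leaves humans far behind.
for k in (4, 7, 9, 20):
    print(k, "factors ->", math.comb(k, 2), "pairwise interactions")
```

A recommendation built from 20 factors involves 190 pairwise interactions, roughly thirty times what a handler juggling 4 factors faces, and real-time review offers no opportunity to work through them one at a time.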
This point is crucial. If Dr. Amodei's expectation required humans to process more than 4 factors at a time in real time, then he was requiring human handlers to exceed human capabilities. This was an impossible expectation, so his so-called contract specification was a fraud. The contract was invalid. Had the Pentagon known that this was his expectation, the Pentagon would not have signed the contract.
- Dr. Amodei seems to think that this situation is akin to a worker who has a colleague who is a much better writer. The colleague can write a page of the clearest prose in less than an hour, whereas the worker might require a day or two to produce something that isn't as well written. But the worker is able to read and understand the colleague's clearly written prose in a few minutes.
So too, an agent can identify the many factors that best identify enemy targets plus non-combatants in an area and how best to respond to the enemy's threats. Identifying the factors is the hard part. Surely the human handler will be able to understand the agent's recommended best response within a few moments, right? Absolutely wrong if the human handler must understand, in real time, a recommendation that involves more than 4 factors. Dr. Amodei is either ignorant of Miller's historic finding and Cowan's lower, more accurate assessment ... or he has no respect for psychological research.











