Last update: Monday 3/11/24
Welcome to our 3Mar24 TL;DR summary of the past week's top AI stories on our "Useful AI News" page. Unfortunately, the editor of this blog had time to produce a TL;DR and podcast for only one top story ➡ Elon Musk Sues OpenAI and Sam Altman ... in two parts.
The first part considers Musk's goal in funding the establishment of OpenAI -- the development of AI via open-source code available to everyone -- the failure of this strategy, and the likelihood that Musk will lose his lawsuit. The second part briefly sketches a high-powered revision of this strategy that replaces Musk with DARPA (Defense Advanced Research Projects Agency) as the sponsor of a dual-use initiative.
A. TL;DR ... top story in past week ...
Elon Musk Sues OpenAI and Sam Altman
Elon Musk, a co-founder of OpenAI, is suing OpenAI and Sam Altman because he claims that OpenAI's partnership with Microsoft violates OpenAI's "Founding Agreement". This agreement was made by three people -- Elon Musk, Sam Altman, and Greg Brockman -- when OpenAI was formed in 2015. Brockman became Chairman of OpenAI; Altman became its CEO; and Musk contributed $44 million plus his time and valuable connections to the startup.
Although Musk's complaint has been reported by all of the major media, the editor of this blog believes that his readers will gain far greater insight into the immediate and longer term consequences of this lawsuit by reading a few paragraphs from a few primary sources.
Musk's complaint
The first source is the complaint that Musk filed with the Superior Court of California, County of San Francisco on 2/29/24, requesting a jury trial. A pdf copy can be found ➡ HERE.
The following quotes from this document convey the essence of Musk's complaint. Note that "the three" in the first line of paragraph 24 refers to Musk, Altman, and Brockman.
24. "Together with Mr. Brockman, the three agreed that this new lab: (a) would be a nonprofit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons (The “Founding Agreement”). Reflecting the Founding Agreement, Mr. Musk named this new AI lab “OpenAI,” which would compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity, not the shareholders of a private, for-profit company (much less one of the largest technology companies in the world)."
Is GPT-4 really an AGI algorithm? Musk's complaint goes on to assert that Microsoft's own researchers have concluded that it is:
31. "Furthermore, on information and belief, GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI. In this regard, Microsoft’s own researchers have publicly stated that, “[g]iven the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
Microsoft Research
Here's a link to a pdf copy of the Microsoft Research report to which Musk referred in his complaint:
- "Sparks of Artificial General Intelligence", Sebastien Bubeck, Varun Chandrasekaran, et al., Microsoft Research, 4/13/23
Readers whose time is strictly limited need only consider the four examples in the document's "Figure 1.1: Preliminary examples of GPT-4’s capabilities in language, vision, coding, and mathematics".
Readers who have more time should skim the rest of this long document for many more examples of stunning emergent cognitive skills that a pre-release version of GPT-4 displayed before guidelines imposed limits on the skills of the final released version. (Readers are reminded that "emergent" skills are capabilities that a model displays even though it was not programmed to display them.)
Probable court decision
Musk will probably lose this civil case.
- First, his assertion that GPT-4 is an AGI is flatly contradicted by the same experts in the same report that Musk cited to support his allegation. The "Sparks" in the title of the Microsoft Research report reflects the authors' caution: the pre-release version of GPT-4 did not display all of the cognitive skills that might be expected of an AGI system, i.e., it was only "an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
- Second, his filing does not include a copy of the “Founding Agreement” or a link to such a formal document. If Altman claims that Musk misunderstood whatever agreement had been made, the court would be facing a "He said, he said" situation.
- Third, does Musk have what lawyers call "standing" that gives him the right to file suit against Altman and OpenAI, i.e., how was Musk harmed by OpenAI's partnership with Microsoft?
- Fourth, given that Microsoft had to invest $13 billion in OpenAI within three years to reap the success of GPT-4, it seems likely that Musk's original $44 million investment was orders of magnitude too small to support a comparably successful outcome, even if it was just a down payment with more to come within three or four years.
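For readers who like to check the arithmetic, here is a minimal back-of-envelope sketch of that last point, using the round figures reported in the press and quoted above:

```python
# Quick check of the funding-gap claim, using the round figures quoted above.
musk_investment = 44e6        # Musk's reported contribution: $44 million
microsoft_investment = 13e9   # Microsoft's reported investment: $13 billion

ratio = microsoft_investment / musk_investment
print(f"Microsoft's investment was roughly {ratio:.0f}x Musk's")
# ~295x, i.e., well over two orders of magnitude
```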
But even if, somehow, the court decides in Musk's favor, it might compensate him by requiring OpenAI to refund his $44 million investment plus a hefty penalty. Let's think really big here; let's suppose the court decides that OpenAI and Altman together must provide Musk with one billion dollars.
After going through the time-consuming motions of objections and appeals, OpenAI and Altman would probably pay the billion dollars, then get on with their lucrative multi-billion-dollar partnership with Microsoft. Where would Altman and OpenAI find one billion dollars to protect that partnership? ... Hmmmmmm ... Can you think of a three-trillion-dollar corporation that might be willing to fund a billion-dollar payment that would preserve OpenAI's lucrative partnership with Microsoft? ... Hmmmmmm
Longer term consequences
The longer term consequences of Musk's lawsuit, whether he wins or loses, will matter far more because he filed his suit within the context of massive shifts in two underlying conceptual frameworks:
- The increasing appetite for substantial regulation of Big Tech in the US, in the UK, and in the EU ... and
- The growing consensus among genAI (generative AI) experts that AGI is likely to emerge much sooner than previously predicted, i.e., not twenty years from now, but within the next five to ten years.
The prospect that AGI will emerge much sooner than previously anticipated means that the speed with which we are developing AGI now greatly exceeds the speed with which we can develop effective policies for regulating genAI technology.
- Do any readers of this blog over 12 years of age really believe that U.S. elected officials can formulate effective regulations for any highly impactful new technology, such as genAI, in less than five years? The EU might be able to develop effective policies faster than the U.S. or the U.K., but EU policies are unlikely to be adopted quickly by the (disunited) U.S. (Congress) or the (Brexit) UK.
Nevertheless, extensive public discussions of Musk's lawsuit in the media during the coming weeks or months will provide substantial opportunities for our policymakers to consider the implications of his primary concern that for-profit companies cannot be trusted to develop AGI in the public interest.
- For-profit Big Tech companies are likely to exaggerate the potential value of their current AI technology while minimizing or even hiding their tech's vulnerabilities, because honest disclosures would depress future sales.
- For-profit Big Tech companies are also unlikely to recommend strong regulations when consulted by legislators ... unless their recommendations set bars of entry into the market high enough to substantially inhibit the participation of smaller potential competitors ... and impose no substantial inhibitions on Big Tech's current AI technology.
Back in 2015, Musk didn't trust Google, Big Tech's AI leader at the time. Being a conservative, he didn't give serious consideration to the imposition of regulations. Instead, he funded the establishment of a non-profit developer of AI that would compete with Google and thereby act as a check on Google's efforts to dominate the evolution of this powerful technology. His efforts failed. Ironically, OpenAI was captured not by Google but by Microsoft, a Big Tech giant that back in 2015 was running a distant third, behind Google and Amazon.
Musk's strategy failed, but had it succeeded, OpenAI would have become a highly prized source of insightful recommendations for our legislators. Could a stable, well funded, non-profit competitor to Big Tech be created today? This paragraph marks the end of this week's TL;DR. The next section of this blog note provides a brief sketch of a more effective new competitor, but one that requires another major shift in our underlying conceptual framework.
End of TL;DR
... TL;DR Post Script ...
High-powered 2024 revision of Musk's AI strategy
The weaponization of genAI
Demis Hassabis, CEO of Google DeepMind, has dedicated his career to transforming AI into tools that scientists can use to address mankind's biggest challenges. Kevin Scott, Microsoft's CTO, sees genAI chatbots as great levelers that enable ordinary people who have no AI expertise to use AI as a tool to increase their workplace productivity and/or to identify ways to live more satisfying lives.
For some reason, the biggest cloud over both ambitions has been deemed to be the low-probability event that AGI, the most advanced form of genAI, might spontaneously evolve into an entity that is no longer a tool for humans but an existential threat to all mankind.
But before genAI can evolve into AGI, there is an undeniably high probability, nay, a certainty, that the most powerful genAI will be weaponized, i.e., converted into weapons that nation states or non-state entities can use to demolish other nation states or non-state entities. Why? Because that's what we humans usually do: we turn powerful technologies into powerful weapons that we aim at our adversaries.
- Do any readers of this blog seriously doubt that North Korea, China, and Russia are already laying out roadmaps for developing genAI weapons that can be used against the U.S.? And does anybody seriously doubt that the Pentagon is quietly planning the development of genAI-based weapons that could be directed against North Korea, China, and Russia?
Skeptics are referred to a recent article in the Washington Post:
-- "Pentagon explores military uses of large language models", Eva Dou, Nitasha Tiku and Gerrit De Vynck, Washington Post, 2/20/24 - Some readers may recall the "fire sale" that was the focal event of the 2007 Bruce Willis "Die Hard 4" movie. What's a "fire sale"? Please watch this brief YouTube clip to find out. How could a "fire sale" be triggered? Back then, via lots of pseudo-tech movie double talk. But today? GPTs, baby, high powered G ...P ... Ts.
Unfortunately, the legislators responsible for funding the development of new high-powered weapons that could destroy our adversaries will receive the same inappropriate, self-serving responses from Big Tech as the legislators seeking to formulate regulations that would prevent the development of an AGI that might destroy all of humanity. That's why another big shift in our underlying conceptual framework is required.
Dual use technologies
Many decades ago, DARPA -- the Defense Advanced Research Projects Agency in the U.S. Department of Defense -- recognized that our defense contractors were sometimes far less cost-effective in developing new applications of new technologies than were the companies in our economy's civilian sector, even when the new technologies had obvious applications in the defense sector. This insight was reinforced by DARPA's identification of many examples of things like "$800 toilet seats" that had been designed by defense contractors.
The first correction in purchasing strategy was straightforward: buy cost-effective products from the civilian sector, then "ruggedize" and "militarize" them to make them useful in the defense sector. But that's the low-hanging fruit.
But what about new technologies that had high potential value in both the civilian sector and the defense sector, yet were regarded by the venture capitalists (VCs) in the civilian sector as too risky, so the VCs declined to invest in their development? No low-hanging fruit there. If the defense contractors were tasked with developing these risky new technologies, they would probably produce more "$800 toilet seats". So what to do? What to do?
Being one of the cleverest agencies in the U.S. government -- no, being one of the cleverest agencies in the galaxy -- DARPA came up with a clever new strategy. It awarded substantial contracts to private sector firms to develop new technologies in their earliest phases, when the risks of failure were highest. It continued to fund the best private developers until the risks of failure in the next phase were low enough to attract investments from VCs. Then DARPA stepped back and let the VCs compete for the chance to fund the developers' final phases. When products based on the new technology went on the civilian market, the Department of Defense bought the inexpensive new low-hanging fruit, then ruggedized and militarized it to make useful tools for our troops.
- Readers unfamiliar with DARPA's achievements may not know that DARPA funded the pre-competitive, high risk development of the Internet, robots, self-driving cars, high powered workstations, GPS, high definition flat panel displays, parallel processing, computer mice, high performance computing (a/k/a supercomputers), etc, etc, etc
So much for context. The final paragraphs of this note sketch an updated and, hopefully, more effective version of Musk's failed strategy.
The Open Source Generative AI Development Consortium (OSGAIDC)
- Members
DARPA will convene a consortium of leading U.S. universities whose faculty include the most esteemed AI experts in the U.S.
- Mission
The mission of the consortium will be the development of generative AI via open-source software that is published and made available to the public. None of the consortium's activities or outputs will be classified.
- Duration
Five years.
- Funding
-- DARPA will provide $20 billion per year to the consortium for five years (a quick tally of these numbers follows below).
-- Member universities whose faculty take full or partial leave from their current teaching responsibilities will receive funds from the consortium to hire additional faculty to assume the teaching responsibilities of their on-leave faculty members.
- Faculty, support personnel, and other resources
-- The consortium will pay the on-leave faculty members no more than five times the hourly rates they are paid by their universities ... This "pay raise" will provide an incentive for them to participate in the consortium's activities rather than serve as highly paid AI consultants for Big Tech firms.
-- The consortium's full-time non-faculty techs and other support personnel will be paid competitive market rates for their services.
-- Reference: "Silicon Valley is pricing academics out of AI research", Naomi Nix, Cat Zakrzewski and Gerrit De Vynck, Washington Post, 3/10/24
- Advisory responsibilities
From time to time, DARPA may require participating faculty members to share their expertise with legislators and other key federal decision makers.
- Acquisition of scarce non-personnel resources via the Defense Production Act
Given DARPA's goal of facilitating the development of powerful open-source genAI software that has dual use, i.e., civilian and defense applications, the consortium's success would have a profound positive impact on our national security. DARPA will therefore advise the administration to invoke the Defense Production Act to secure scarce resources for the consortium at (negotiated) below-market rates, e.g., computing time to run models on expensive chips in the cloud.
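For readers who want the proposal's numbers in one place, here is a minimal back-of-envelope sketch. The $20 billion/year budget, five-year duration, and 5x pay cap come from the sketch above; the sample professor's $120/hour university rate is purely hypothetical.

```python
# Back-of-envelope tally of the proposed consortium's numbers.
# Budget, duration, and the 5x pay cap are from the sketch above;
# the example university hourly rate is hypothetical.
ANNUAL_BUDGET = 20e9   # DARPA funding per year, in dollars
YEARS = 5

total_funding = ANNUAL_BUDGET * YEARS  # $100 billion over the consortium's life

def consortium_hourly_cap(university_hourly_rate: float) -> float:
    """On-leave faculty may earn at most five times their university hourly rate."""
    return 5.0 * university_hourly_rate

print(f"Total funding: ${total_funding:,.0f}")  # $100,000,000,000
print(f"Pay cap for a $120/hour professor: ${consortium_hourly_cap(120):,.0f}/hour")  # $600/hour
```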
B. Top story in past week ...
- Public Policy
"Elon Musk Sues OpenAI and Sam Altman for Violating the Company’s Principles", Adam Satariano, Cade Metz and Tripp Mickle, NY Times, 3/1/24 ***
-- A copy of Musk's electronically filed complaint can be found HERE
-- This story was also covered by Financial Times, Wired, Bloomberg, Engadget, Washington Post, The Verge, Wall Street Journal, Reuters, BBC, TechCrunch, VentureBeat, NY Times #2
Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.