Wednesday, May 31, 2023

Ezra Klein, "The Matrix", and Big Tech's current efforts to trivialize LLMs

Last update: 5/31/23 
Once again, the editor of this blog calls his readers' attention to some cogent observations about artificial intelligence (AI) from Ezra Klein, his favorite public intellectual. Mr. Klein's observations appeared in a recent NY Times op-ed: "Beyond the ‘Matrix’ Theory of the Mind", Ezra Klein, NY Times, 5/28/23. As the editor suggested in a previous note on this blog, "Ezra Klein: the public intellectual as AI critic", Klein is brilliant and well informed; but he is not an AI expert, so his incisive comments illuminate the limits of what non-experts can understand about large language models (LLMs) and other AI technologies.

Klein's critique
Klein's op-ed doubts that ChatGPT, Bard, and other chatbots based on LLMs will increase their users' productivity. The first part of his piece reminds readers that the Internet was also hyped as a productivity booster, but such gains were seldom realized. Indeed, email, messaging, and other Internet social media have become notorious time sucks. Way back in the 1960s, Herbert Simon, one of the founding fathers of AI, observed that when information becomes plentiful, attention becomes the scarce resource.

Turning to the super-hyped LLMs, Klein disputes their productivity benefits because they embody the flawed notion that gathering more information faster than ever will make us more productive. Readers who have seen any of the "Matrix" movies will recall that the heroes in those features acquired new knowledge instantly via injections. In real life, one has to read, re-read, and contemplate new material, a time-consuming process, in order to integrate new information into one's personal knowledge base. Ironically, the energetic Big Tech promoters of chatbots based on complex neural networks ignored the complexity of the real neural networks in the brains of their human users.

Dazzling first impressions ... followed by rueful second thoughts
Now let's give Klein's ironic screw a few more turns. When ChatGPT running GPT-3.5 was introduced in the fall of 2022, we were impressed; but when the underlying LLM was upgraded to GPT-4, we were dazzled because it seemed to know everything about everything ... until we learned that it sometimes rendered incorrect responses to questions, emitted biased responses, infringed on individual privacy, violated copyrights, and hallucinated from time to time. Beyond these shortfalls, it was easily hacked by spurious prompts.

Worst of all, we learned that many of the most respected experts in the field thought there was at least a 10 percent chance that LLMs were on a development path whereby someday, much sooner than anyone had previously expected, we might lose control of these synthetic minds; indeed, they might pose an existential threat to mankind.
Across the land, a rising chorus demanded that LLM development be subjected to government regulation. Big Tech seemed to support this demand (sometimes) ... but it nevertheless accelerated the pace of its development. Indeed, in May 2023, first Google, then Microsoft rolled out splashy announcements at their annual conferences of upgrades for just about all of their software products and services, upgrades that featured glitzy new capabilities enabled by the insertion of LLM technology. But neither company announced significant breakthroughs in resolving the problems that had triggered second thoughts, then strident demands for government regulation:
  • Google I/O 2023 conference on 5/10/23 
    -- Overviews of all announcements provided by Gizmodo, TechCrunch, and Wired
    -- AI announcements were covered by Mashable and Gizmodo
    -- "Google launches PaLM 2, its next-gen large language model", Frederic Lardinois,  TechCrunch, 5/10/23
    -- "Google jumps into the AI coding assistant fray with Codey and Studio Bot", Sanuel Axon, Ars Technica, 5/10/23

  • Microsoft Build 2023 conference on 5/24/23
    -- Keynote address = Engadget, The Verge
    -- Fabric, a new data and analytics platform = TechCrunch
    -- Plugins for AI apps = TechCrunch
    -- Copilot for Windows 11 = Ars Technica, The Verge

No disruptive black swans here ... yet
So why did Microsoft and Google produce such blatant displays of their strong commitment to LLMs in a context of rising public concerns? The editor of this blog suggests two reasons: their expectations of huge increases in short term profits and their hopes that widespread distribution of mundane upgrades would dampen their users' fears of LLMs and thereby decrease the public's demands for regulation.
  • Regardless of the profound reservations of the most respected AI experts, the market thinks that AI is the purest possible gold; the strongest indicator is Nvidia's sudden rise to a trillion-dollar valuation. Nvidia is the largest supplier of graphics processing units (GPUs), the essential chips in the most powerful AI systems.
    -- See "Welcome to the trillion-dollar club, Nvidia", Alex Wilhelm, TechCrunch, 5/30/23This story also covered by GizmodoCNBC 

  • The upgrades were "mundane" in the sense that they were good enough to encourage most users to try them out, but they were not black swans. Users could implement the upgrades as slowly as they chose without fear of being overwhelmed by competitors who opted for faster adoption schedules. Users who double- and triple-checked the first drafts produced so quickly by these upgrades could use the edited drafts with confidence that they were not jeopardizing the viability of their operations. This was not the stuff of human extinction, so there was no need to rush towards restrictive regulations.

Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.