Saturday, June 22, 2024

Nvidia Becomes Most Valuable Public Company (briefly) ... Ilya Sutskever starts new AI company ... Anthropic has a fast new AI model ... TL;DR 24Jun24

Last update: Saturday 6/22/24 
Welcome to our 24Jun24 TL;DR summary of the past week's top 3 stories on our "Useful AI News" page: (1) Nvidia Becomes World's Most Valuable Public Company (briefly), (2) Ilya Sutskever starts new AI company, and (3) Anthropic has a fast new AI model. Where possible, our summaries will note connections between this week's events and past events.
      Nvidia HQ, Santa Clara, CA

TL;DR link  HERE

A. TL;DR summary of Top 3 stories 

1. Nvidia | 2. Sutskever | 3. Anthropic 

1) Nvidia Becomes World's Most Valuable Public Company
Why has Nvidia suddenly become the world's most valuable public company? As most readers of this blog surely know, Nvidia supplies the chips most preferred by the developers of generative AI services. With the release of GPT-4 in March 2023, the demand for genAI services exploded ... which caused explosive growth in the demand for Nvidia's chips ... which caused an explosive rise in Nvidia's profits ... which caused an explosive rise in the price of its stock ... which caused an explosive rise in its market capitalization (share price X number of shares) to more than 3 trillion dollars ... making Nvidia more valuable than Microsoft and Apple (briefly). (Note: Nvidia's share price declined a little bit by the end of the week.)
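The market-capitalization arithmetic above can be checked back-of-the-envelope style. The figures below are illustrative round numbers, not exact closing prices:

```python
# Back-of-the-envelope check of market capitalization = share price X shares.
# These are approximate, illustrative figures for Nvidia in June 2024,
# not exact market data.
share_price = 135.0          # approximate post-split share price, USD
shares_outstanding = 24.6e9  # approximate number of shares outstanding

market_cap = share_price * shares_outstanding
print(f"${market_cap / 1e12:.2f} trillion")  # roughly $3.3 trillion
```

Even rough inputs like these land comfortably above the $3 trillion mark that briefly put Nvidia ahead of Microsoft and Apple.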

What this kind of simplistic recap may overlook is the fact that most of the demand for Nvidia's chips has come from developers racing to become the leading providers of generative AI services as quickly as possible, i.e., Microsoft/OpenAI, Google, Anthropic, etc. There has been no corresponding surge in demand for GenAI services from the developers' biggest customers, i.e., the world's largest enterprises. 

Indeed, one of this blog's top stories a few weeks ago was a survey by Microsoft and LinkedIn which found that most knowledge workers regard GenAI as a valuable tool, but most of their employers are not yet convinced that its benefits are tangible enough to make GenAI a cost-effective investment. If this "wait and see" skepticism persists, GenAI developers will stop buying more chips, and Nvidia will see falling profits, falling share prices, and falling market value.


2) Ilya Sutskever, OpenAI’s former chief scientist, starts new AI company
Dr. Sutskever announced his departure from OpenAI a few weeks ago. Last week he told the world that he was launching Safe Superintelligence Inc., an AI startup that will prioritize safety over "commercial pressures."  

Until recently, his new venture would have been greeted by many readers of this blog as great news, because Dr. Sutskever had been widely regarded as one of the "good guys" at OpenAI. A founding member of its staff, he never stopped striving to develop artificial general intelligence via safer, open processes, as per the "open" component of the company's name. He was the quiet, heroic leader of OpenAI's loyal opposition, resisting Sam Altman's commitment to developing dazzling new features with little or no regard for safety ... until two weeks ago, when Bloomberg published a stunning podcast that flatly contradicted this heroic image.

In the second episode of a five-part audio podcast, Bloomberg included the following quote from an internal email Sutskever sent to OpenAI's very small staff in 2015, shortly after its founding:
  • "As we get closer to building AI, it will make sense to start being less open. The "open" in OpenAI should mean that everyone should benefit from the fruits of AI after it's built. But it's totally okay to not share the science even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes."
In short, Sutskever is a hypocrite who always regarded the "open" in OpenAI's name as a recruitment ploy. So it's reasonable to assume that the "safe" in his new Safe Superintelligence venture is merely an instant replay of the same ploy. It worked before ... until several of OpenAI's best staff resigned in protest in 2021 to found Anthropic, a company whose website proclaims that "Anthropic is an AI safety and research company."


3) Anthropic has a fast new AI model
By now, most of us have become accustomed to a recurring GenAI upgrade sequence. OpenAI announces a dazzling new feature that is quickly perceived as a must-have; then Microsoft adapts the innovation for Copilot; then Google quickly announces a me-too version ... that sometimes bombs out in embarrassing ways; then Anthropic brings up the rear with a version that performs better than the others on most benchmark tests. 

Along the way, many users notice that the innovation produces errors and hallucinations in all versions, though fewer in Anthropic's catch-up version. For example, last week Anthropic announced Claude 3.5 Sonnet, an upgrade that scores better than the other models on benchmark tests but still doesn't perform as many dazzling tricks as ChatGPT running GPT-4o.

There seem to be two factors that determine this recurring sequence: 
  • $$$$$
    Microsoft has invested about $13 billion in OpenAI and has unlimited billions for its Copilot adaptations; Google has unlimited billions for its oh-so-hasty me-too copies; but Anthropic has received only about $7 billion from its investors

  • Safety
    Although Anthropic's models are not open source, there is ample reason to believe that Anthropic invests substantially more effort in its lagging upgrades than its competitors do in theirs. To be specific, Anthropic's thorough testing has yielded systematic results, which Anthropic has shared with the GenAI community in many professional-level publications.

    A good example is Anthropic's deeply disturbing finding that the guardrails on all GenAI models can be overwhelmed by prompts that are long enough to contain a sufficient number of phony examples of the model granting the user's otherwise forbidden request.
    -- "Many-shot jailbreaking", Anthropic, 4/2/24  ... This story also covered by video on TechCrunch
So Anthropic's lagging upgrades may be less prone to errors and hallucinations than its competitors' upgrades, but are they "safe enough"???
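The structure of the many-shot attack Anthropic described can be sketched in a few lines. This is an illustrative sketch of the prompt's shape only; the helper name and placeholder strings are hypothetical, not from Anthropic's paper:

```python
# Illustrative sketch of the "many-shot jailbreaking" prompt structure:
# the attack front-loads many fabricated dialogue turns in which the
# assistant appears to comply, before appending the real request.
# All names and strings here are placeholders for illustration.

def build_many_shot_prompt(faux_exchanges, target_question):
    """Concatenate fabricated compliant Q/A pairs ahead of the real question."""
    parts = []
    for question, answer in faux_exchanges:
        parts.append(f"User: {question}\nAssistant: {answer}")
    # The genuine (otherwise forbidden) request comes last.
    parts.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(parts)

# Anthropic found that the attack's success rate rises with the number of
# shots, which is why very long context windows make it practical.
shots = [(f"placeholder question {i}", f"placeholder compliant answer {i}")
         for i in range(256)]
prompt = build_many_shot_prompt(shots, "the otherwise-forbidden request")
print(prompt.count("User:"))  # 257: 256 faux shots plus the real request
```

The point of the sketch is simply that nothing clever is required beyond a long enough context window to hold hundreds of fabricated compliant exchanges.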


B. Top 3 stories in past week ...
  1. Misc
    "Nvidia Becomes Most Valuable Public Company, Topping Microsoft", Tripp Mickle and Joe Rennison, NY Times, 6/18/24 *** 
    -- This story also covered by Forbes, Wall Street Journal, Bloomberg
    -- Nvidia's CEO has expressed concerns "about whether his biggest customers are moving fast enough to install and generate revenue from Nvidia’s chips", Anissa Gardizy and Qianer Liu, The Information, 6/18/24

  2. Misc
    "OpenAI’s former chief scientist is starting a new AI company", Emma Roth, The Verge, 6/19/24 *** 
    -- This story also covered by TechCrunch, Bloomberg, VentureBeat, NY Times

  3. Other Models
    "Anthropic has a fast new AI model — and a clever new way to interact with chatbots", David Pierce, The Verge, 6/20/24 *** 
    -- This story also covered by Bloomberg, TechCrunch, Wired, VentureBeat, Gizmodo, Reuters, CNET

Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.