Saturday, June 1, 2024

OpenAI's new safety team run by Sam Altman ... Ex-OpenAI Safety Leader Joins Anthropic ... Sam Altman fired in Nov 2023 for lying ... OpenAI Says Russia and China Used Its A.I. in Covert Campaigns ... TL;DR 3June24 summary

Last update: Monday 6/3/24 
Welcome to our 3June24 TL;DR summary of the past week's top AI stories on our "Useful AI News" page: (1) OpenAI's new safety team run by Sam Altman, (2) Ex-OpenAI Safety Leader Joins Anthropic, (3) Sam Altman was fired in Nov 2023 for 'outright lying', and (4) OpenAI Says Russia and China Used Its A.I. in Covert Campaigns

No podcast this week 
TL;DR link HERE

A. TL;DR summary of Top 4 stories 
If our readers scan the top four events of the past week listed in section B (below), they will see that these four events form a highly interrelated prelude to the mega event that will occur next week: Apple's long-awaited late entry into the generative AI race at its 2024 developers conference. Indeed, Apple is the last of the five biggest Big Tech firms to offer genAI services, behind Microsoft and its partner OpenAI, Google, Facebook, and Amazon (via its large investment in Anthropic).

Privacy
Like many other observers, the editor of this blog anticipates that Apple's entry into the genAI race will place considerable emphasis on the privacy of its users' personal data, because privacy is a bedrock value that Apple guarantees its users.

Safety
But Apple also places a high value on safety, not just safe as in no physical danger, but safe as in "worry-free", as in its motto: "It just works". Apple doesn't want to produce results that users have to double-check for errors and hallucinations.

Which brings us back to this week's top four events, which address the unsafe genAI services offered by the other four Big Tech contenders in the genAI race, especially OpenAI and its partner Microsoft. They all offer genAI services that require their users to double-check all results for errors and hallucinations.

A recent TL;DR on this blog traced the exodus of OpenAI's founding staff -- a stream of resignations driven by CEO Sam Altman's insufficient concern for safety -- that began in 2021, when a group of senior staff resigned to form Anthropic, a company whose website proclaims that "Anthropic is an AI safety and research company. We build reliable, interpretable, and steerable AI systems." The exodus culminated in the resignation of Dr. Ilya Sutskever, one of the members of OpenAI's board who voted to fire Altman in November 2023.

This week Helen Toner, another member of the OpenAI board who voted to fire Altman, declared that the board fired Altman for "lying". She claims that Altman did not tell the board that he was releasing ChatGPT in November 2022; the board found out via Twitter, which prevented it from making any assessment of ChatGPT's safety before its release.
-- Also note the reservations of Mira Murati, OpenAI's CTO: "Key OpenAI Executive Played a Pivotal Role in Sam Altman's Ouster", Mike Isaac, Tripp Mickle and Cade Metz, NY Times, 3/7/24

This week Jan Leike, who co-led OpenAI's safety efforts and was one of its few remaining early staff members, left OpenAI to take a position at ... you guessed it, Anthropic.

Sensing the widespread unease caused by all of these resignations, CEO Sam Altman appointed a new OpenAI safety committee ... with himself as chairman ... wow ... we are so reassured.

Then OpenAI announced that it had thwarted political misinformation campaigns launched by China and Russia, campaigns that had misused ChatGPT ... but campaigns that OpenAI admits had been ineffective. So the remnants of OpenAI's safety staff were still effective enough to thwart an ineffective misuse of ChatGPT ... wow ... we are so underwhelmed.

Safety vs. Dazzle
In summary, our previous TL;DR reminded us that OpenAI, the world's leading proponent of generative AI and the world's undisputed generative AI pacesetter, was led by a CEO who was far more committed to dazzle than to safety. To be sure, he liked to talk about AI safety ... to congressmen, senators, and world leaders. But inside OpenAI, he suppressed it, and his suppression drove safety-oriented staff to resign. Given a choice between making ChatGPT safer and making it more dazzling, OpenAI's CEO chose more dazzle, time and time again.

How could Sam Altman have chosen otherwise? He dropped out of Stanford University before receiving any formal academic training in machine learning, and his subsequent work experience provided him with no on-the-job training in the development of generative AI systems. But that experience honed his natural talents as a salesman, and he became the world's leading super salesman for generative AI services. As every good salesman knows, more dazzle sells more product because more dazzle is more fun ... until it produces more errors and more hallucinations.

A safety conscious OpenAI
What would OpenAI look like if Sam Altman had been more committed to safety than to dazzle? It would probably look like ... Anthropic. Here's a partial list of recent Anthropic publications that have contributed to making genAI safer. No comparable list of publications exists from OpenAI, even though OpenAI has received three to four times the funding that Anthropic has.
  • "Anthropic researchers find that AI models can be trained to deceive", Emilia David, TechCrunch, 1/13/24 
    -- This story also covered by VentureBeat ... and Anthropic

  • "Many-shot jailbreaking", Anthropic, 4/2/24 
    -- This story also covered by a video on TechCrunch

  • "Measuring the Persuasiveness of Language Models", Anthropic, 4/9/24

  • "[Gen] AI Is a Black Box. Anthropic Figured Out a Way to Look Inside", Steven Levy, Wired, 5/22/24
    -- This story also covered by Fast Company, NY Times, Time, ... and Anthropic

B. Top 4 stories in past week ...  
  1. OpenAI
    "OpenAI has a new safety team — it’s run by Sam Altman", Emma Roth, The Verge, 5/29/24 *** 
    -- This story also covered by Gizmodo, TechCrunch, Mashable, Engadget, Bloomberg, Forbes, ... and OpenAI

  2. OpenAI
    "Ex-OpenAI Safety Leader Leike to Join Rival Anthropic", Rachel Metz, Bloomberg, 5/28/24 *** 
    -- This story also covered by TechCrunch, The Verge, Forbes, Mashable

  3. OpenAI
    "OpenAI CEO Sam Altman was fired for 'outright lying,' says former board member", Amanda Yeo, Mashable, 5/29/24 *** 
    -- This story also covered by Bloomberg, Gizmodo, The Verge, Fortune, and The TED AI Show (podcast interview with Helen Toner ... the first 20 minutes of this podcast are a "must hear")

    -- On the other hand, "Paul Graham claims Sam Altman wasn’t fired from Y Combinator", Kyle Wiggers, TechCrunch, 5/30/24

  4. OpenAI
    "OpenAI Says Russia and China Used Its A.I. in Covert Campaigns", Cade Metz, NY Times, 5/30/24
    -- This story also covered by Forbes, Axios, Financial Times, Bloomberg, Washington Post, ... and OpenAI 30May24, OpenAI 17Feb24

Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.