Saturday, June 8, 2024

OpenAI Employees Warn of Risk and Retaliation, ... OpenAI Peeks Inside ChatGPT, ... Microsoft Switches Off Recall ... Bloomberg's "OpenAI Story" ... TL;DR 10Jun24 summary

Last update: Monday 6/10/24 
Welcome to our 10Jun24 TL;DR summary of the past week's top AI stories on our "Useful AI News" page: (1) OpenAI Employees Warn of a Culture of Risk and Retaliation, (2) OpenAI Offers a Peek Inside the Guts of ChatGPT, (3) Microsoft Will Switch Off Recall by Default After Security Backlash, and (4) Bloomberg's disturbing OpenAI Story


OpenAI Headquarters, San Francisco, CA
TL;DR link HERE

A. TL;DR summary of Top 4 stories 


1) OpenAI Employees Warn of a Culture of Risk and Retaliation
Ten current and former OpenAI employees provided most of the 16 signatures to an open letter that called for greater protection of whistleblowers in AI companies. Here are a few excerpts from their letter:
  • "AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

    So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues." 
Their letter goes on to make four recommendations: (1) AI companies should neither create nor enforce agreements that prohibit criticism of AI risk; (2) AI companies should develop verifiably anonymous processes that enable current and former employees to express risk-related concerns; (3) AI companies should support a culture of open criticism of risk-related issues that allows current and former employees to share their concerns with appropriate outsiders; and (4) AI companies should not "retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed."


2) OpenAI Offers a Peek Inside the Guts of ChatGPT
  • On May 17th, news broke that OpenAI had disbanded its superalignment safety team
    -- "OpenAI’s Long-Term AI Risk Team Has Disbanded", Will Knight, Wired, 5/17/24 

  • On May 29th, OpenAI announced its formation of a new safety team, run by Sam Altman
    -- "OpenAI has a new safety team — it’s run by Sam Altman", Emma Roth, The Verge, 5/29/24

  • On June 4th, concerned current and former employees of OpenAI published their open letter, which implicitly but very loudly charged OpenAI with failing to address the risks of its genAI development in a responsible manner, per the discussion above

  • Two days later, on June 6th, OpenAI published a research paper that described limited but tangible initial progress in the development of tools to understand how genAI models work
    -- "OpenAI Offers a Peek Inside the Guts of ChatGPT", Will Knight, Wired, 6/6/24
Common sense suggests that the biggest risks posed by OpenAI and other AI companies racing to develop genAI as fast as possible derive from the fact that these companies are conducting their ever more massive development efforts with little or no understanding of how genAI really works.

OpenAI's announcement of research that sheds initial light, however small, on the inner workings of genAI models should be regarded as a welcome harbinger of safer genAI development at OpenAI

... except ... except for the fact that this research was conducted by the same superalignment team that OpenAI disbanded in May, a team that had been led by Ilya Sutskever and Jan Leike, both of whom resigned from OpenAI that same month.
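For readers who want a concrete sense of what "tools to understand how genAI models work" looks like, the research in question reportedly centers on sparse autoencoders trained on a model's internal activations. The toy sketch below is NOT OpenAI's code; it assumes PyTorch, and the dimensions and penalty weight are purely illustrative. The idea: reconstruct activations through a wide, sparsity-penalized feature layer, so that individual features become candidates for human interpretation.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Toy sparse autoencoder over a model's hidden activations.
    def __init__(self, d_model: int = 768, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # activations -> feature codes
        self.decoder = nn.Linear(d_features, d_model)   # feature codes -> reconstruction

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # non-negative, hopefully sparse
        return self.decoder(features), features

def training_step(sae, optimizer, activations, l1_weight=1e-3):
    # Reconstruct the activations while penalizing how many features fire (L1);
    # that sparsity pressure is what pushes each feature to be selective.
    reconstruction, features = sae(activations)
    loss = ((reconstruction - activations) ** 2).mean() + l1_weight * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

A feature that fires only on, say, text about legal contracts is far easier for a human auditor to reason about than a raw neuron, which is why this line of work matters for the safety concerns discussed above.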


3) Microsoft Will Switch Off Recall by Default After Security Backlash
The Verge article provides a succinct description of the "Recall" feature that Microsoft intended to run on its new Copilot Plus PCs:
  • "Recall uses local AI models to screenshot mostly everything you see or do on your computer and then give you the ability to search and retrieve anything in seconds. An explorable timeline lets you scroll through these snapshots with ease to look back on what you did on a particular day on your PC. Everything in Recall is designed to remain local and private on-device, so no data is used to train Microsoft’s AI models."
Here are a few quotes from Wired's description of why Microsoft is suddenly recalling "Recall":
  • "After weeks of withering criticism and exposed security flaws, Microsoft has vastly scaled back its ambitions for Recall, its AI-enabled silent recording feature, and added new privacy features."

  • "The changes come amid a mounting barrage of criticism from the security and privacy community, which has described Recall—which silently stores a screenshot of the user's activity every five seconds as fodder for AI analysis—as a gift to hackers: essentially unrequested, preinstalled spyware built into new Windows computers."
Hmmmmm ... Microsoft recalling software because of security problems? Where have we heard this story before? Oh yes, in just about every context other than genAI ... so far. Whereas Apple's macOS and other operating systems have deep roots in open source development, Microsoft has always preferred proprietary in-house development. So Microsoft's systems have tended to be buggier and more hackable. Consider the following report about some of Microsoft's biggest recent non-genAI hacks.
  • "Microsoft Security Breaches Rile U.S. Government Customers", Aaron Holmes, The Information, 3/15/24
    "Microsoft became the world’s biggest seller of cybersecurity software by bundling it with Office and Teams apps. But after a series of hacks exploited that software in the past year, several of Microsoft’s biggest customers are considering whether their reliance on Microsoft’s software bundle puts their security at risk."

4) Foundering, Season Five: The OpenAI Story
Most of the material covered by the 5 episodes of Bloomberg's podcast was extensively reported by tech media in November 2023, when Sam Altman was abruptly fired, then rehired shortly thereafter. However, Bloomberg's podcast contains two new revelations. The first (about 24 minutes into the second episode) is a short quote from an internal email sent by Dr. Sutskever way back in 2015, during OpenAI's first few months of operation. The quote is hypocritical and disheartening. The second revelation in the fourth episode -- only accessible to Bloomberg subscribers at this time -- is profoundly sad. 

Sutskever's email
Here's a transcription of the quote from Sutskever's email cited in the second episode:
  • "As we get closer to building AI, it will make sense to start being less open. The "open" in OpenAI should mean that everyone should benefit from the fruits of AI after it's built. But it's totally okay to not share the science even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes."
Sam Altman has become the subject of a rising tide of criticism lately. While he continues to express the noblest sentiments, his relentless preference for dazzling new features over safety makes his sentiments ring hollow. His critics, including the editor of this blog, have perceived him to be the great misleader who singlehandedly diverted OpenAI from its initial high-minded objective, the pursuit of artificial general intelligence (AGI) for the good of all humanity, into a profit-sharing pilot fish for Microsoft, the biggest of the Big Tech deep-water predators.

But the quote from Sutskever's email suggests that this single bad apple theory is naive. Yes, Altman is a razzle-dazzle salesman who is always looking to make a bigger, quicker buck; but he was definitely not OpenAI's only misleader.  

Four men founded OpenAI in 2015: Altman, Brockman, Musk, and Sutskever. Whenever in 2015 Sutskever sent this internal email, OpenAI's total staff would still have been a very small group whose biggest challenge would have been devising ways to persuade AI hotshots to join their startup instead of taking positions at Big Tech operations that paid high salaries and provided immediate access to lots of computing resources. What to say? What to say? Sutskever's answer: Tell them about our dazzling, high-minded objective. Tell them that we are not trying to make a profit; we are trying to save humanity. Lots of talented, high-minded techs will take lower salaries and fewer resources in exchange for a higher purpose, right? In other words, OpenAI was never open ... not really.

P.S. A few years later, six high-minded recruits perceived the con and left OpenAI to found Anthropic.

Sam Altman's sister ... the fourth episode
The Bloomberg Web page that hosts the link to its audio podcast is titled: "Sam Altman’s Dream of AI Ending Poverty Faces a Messy Reality". Here's a quote:
  • "Altman has made universal basic income part of his personal brand, and he’s envisioned an AI future of abundance, where everyone has enough to eat and a place to live. He has called AI a technology that can “end poverty.”

    Episode four of Foundering: The OpenAI Story delves into the way that these grandiose projections collide with a messy, complicated reality. While Altman was flying around the world talking about how poverty shouldn’t exist, his sister, Annie Altman, was struggling with homelessness in Hawaii"
To be more explicit, the podcast informs us that the 39-year-old billionaire has a deeply disturbed, unskilled sister in her thirties. She lives in poverty and has had to resort to prostitution to pay her bills. The podcast also informs us that Altman supported his sister for a while, but stopped when his mother told him that he was spoiling the child by sparing the rod.

How is it that the oh-so-wise super salesman proclaims that universal guaranteed income would end the widespread poverty that might otherwise occur if AGI took away most people's jobs? How can he expect us to believe that he really believes that universal guaranteed income would be politically acceptable to most voters in most countries when his own mother forbids him to provide a guaranteed income for her only daughter/his only sister? And why would an almost-forty-year-old man apply this kind of cruel and clearly ineffective tough love to his sister?

If the Bloomberg podcast devoted only one or two minutes to its allegations of Altman's cruelty, one might think it prudent to suspend judgment until other media provided corroboration. But no, the podcast spends about 22 long, painful minutes on them. One would expect the billionaire big brother to hire squads of high-priced lawyers to threaten Bloomberg with a billion-dollar slander suit if Bloomberg did not publish an immediate retraction. Instead, Altman's silence must be interpreted as tacit confirmation of Bloomberg's repulsive allegations.


B. Top 4 stories in the past week ...
  1. OpenAI
    "OpenAI Employees Warn of a Culture of Risk and Retaliation", Will Knight, Wired, 6/4/24 *** 
    -- This story also covered by TechCrunch, Wall Street Journal, Bloomberg, VentureBeat, The Verge, NY Times, ... and the open letter

  2. OpenAI
    "OpenAI Offers a Peek Inside the Guts of ChatGPT", Will Knight, Wired, 6/6/24 *** 
    -- This story also covered by an OpenAI Blog Note ... and an underlying OpenAI Research Paper (pdf) co-authored by Dr. Ilya Sutskever and Jan Leike (before they resigned from OpenAI) and others on the recently disbanded superalignment team

  3. Microsoft
    "Microsoft Will Switch Off Recall by Default After Security Backlash", Andy Greenberg, Wired, 6/7/24 *** 
    -- This story also covered by Engadget, The Verge, Bloomberg

  4. OpenAI
    "Foundering, Season Five: The OpenAI Story", Ellen Huet and Shawn Wen, Bloomberg podcast in 5 episodes, June 2024 *** 


Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.