Sunday, January 21, 2024

Sam Altman (AI's Oracle at Davos) and AI smartphones ... TL;DR 21Jan24

Last update: Sunday 1/21/24
Welcome to our 21Jan24 TL;DR summary of the past week's top AI stories on our "Useful AI News" page

(1) AI's Oracle at Davos, and (2) Samsung's new AI smartphones


A. TL;DR ... Top 2 stories in past week  ...

1) Thus spake the AI Oracle at Davos, Sam Altman
Sam Altman, the renowned CEO of OpenAI, was interviewed at Bloomberg House in Davos while attending the annual global conference of the wise and wealthy. A link to the video of the full 30-minute interview is embedded in the Bloomberg article referenced in this page's headline section (below). We have noted only three of Altman's many oracular proclamations in this summary. 

The first expresses his disappointment over the massive copyright-infringement suit that the New York Times filed against OpenAI and Microsoft in late December:
"There is this belief held by some people that you need all my training data and my training data is so valuable"
Altman's comment suggests he believes there is nothing wrong with stealing data from the New York Times as long as OpenAI steals enough data from everybody else that its models would be just about as effective without the NY Times data. 

He would apply the same logic to every other publisher that threatened to sue. In other words, under his leadership, OpenAI will not stop stealing data until so many publishers threaten to sue it simultaneously that it can no longer build effective language models from whatever data remains undefended.

His position is clearly unethical, but we will have to wait for the courts to decide whether it is also illegal. This situation has ample precedent. In the nineteenth century, for example, employers forced their employees to work under abusive conditions. The abuse ceased only when workers banded together in unions whose strikes forced employers into collective bargaining, and further restraints were imposed by legislators once unions became politically active.

Altman's next comment implicitly recognizes that, going forward, OpenAI might be able to produce effective models trained on substantially smaller amounts of data:
"A lot of our research is how do we learn more from smaller amounts of very high quality data,”
Our research? No. Microsoft's research; Microsoft's research; Microsoft's research; not OpenAI's research. Microsoft's research led to its recent publication proclaiming "the surprising power of small language models." That finding would not have seemed credible a few years ago, when OpenAI started training its GPT large language models, without permission, on copyrighted data from the NY Times, Reddit, and many other sources.

Here is, perhaps, his most telling comment, transcribed from the video of the interview. If you watch the video, you will see Altman making this heartfelt proclamation at 3 minutes, 45 seconds:
 "I believe that America is going to be fine, no matter what happens in this election" 
Hmmmmm ... If Sam Altman were a chatbot, most of us -- whether Democrats, Republicans, or Independents -- would dismiss his assessment as a political hallucination ... but he isn't a chatbot. A Stanford dropout, Altman received no academic training in AI as an undergraduate, nor did he subsequently acquire any hands-on work experience as an AI developer. So what is he?

Although his official title is "CEO", chief executive officer, his de facto main role at OpenAI is "CSO", chief sales officer, because he is a brilliant salesman. Recall that his salesmanship landed a roughly $13 billion investment from Microsoft to fund the development of a series of expensive GPT models. So his comment about the forthcoming election is just part of his sales pitch to the next president, whoever that turns out to be. 

As a retired tech, the editor of this blog still recalls the following lessons learned long ago: techs know the product; salesmen know how to sell the product. If the product's quality falls below the quality of competitors' products, techs will change jobs; salesmen will merely change their sales pitches. 

Anyone who really wants to know how a technology is most likely to evolve should ask the best techs, not the best salesmen. For AI, we should trust world-renowned experts like Google's Demis Hassabis, Meta's Yann LeCun, or OpenAI's Ilya Sutskever. Indeed, back in November 2023, Sutskever voted to fire Altman because he didn't trust him. So why should anyone else?

Altman's technical forecasts should not be taken as plausible possibilities, but merely as a salesman's sales-pitch-of-the-moment. Like all great salesmen, Altman is always pitching.

2) Samsung's new AI smartphones
Samsung hosted its most recent Galaxy Unpacked event on January 17 at the SAP Center in San Jose, California. Most of the company's announcements concerned its new top-of-the-line S24 smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. But from the focused perspective of this AI news page, its most important announcement was Galaxy AI, a set of AI-powered tools designed to work alongside the trio of next-generation phones. 

More specifically, in the context of the AI boom launched last year by the unexpected power of large language models, e.g., GPT-4, and the recently recognized, surprising power of small language models, the editor of this blog wanted to find out which language models the Galaxy AI tools would access, and for which purposes.

Here are some key AI-driven capabilities noted in Samsung's press release:
  • New Ways To Connect:
    -- Live Translate: real-time, two-way call translation within the native call app.
    -- Chat Assist: immediately translates messages, allowing users to easily text with friends in the language they’re most comfortable speaking.
    -- Transcript Assist: lets users create and share short summaries of recorded conversations.
    -- Notes Assist: automatically formats and summarizes notes with bullet points.

  • Google's Circle to Search, which allows users to perform Google searches on anything they circle, without switching apps.
Which language models do the new phones use, and when? The press release indicates only that Google is Samsung's primary AI partner, but here is a quote from Wired:
"There's a mix of on-device (via Gemini Nano) and cloud-based AI smarts (via Gemini Pro) on the Galaxy S24 series, though Samsung leans more heavily on the latter."
Indeed, the Samsung press release specifies only that Circle to Search uses the on-device model as the default option, though users can also opt for the cloud version. Why does an edge device like a smartphone, especially a high-end smartphone, make such intensive use of the cloud?
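
For readers who like to see the pattern spelled out, here is a minimal, purely hypothetical Kotlin sketch of the "on-device by default, cloud on request" routing described above. The interface, class, and function names are placeholders invented for illustration; they are not part of Samsung's or Google's actual SDKs.

  // Hypothetical sketch only: no real Samsung or Google APIs are used here.
  interface LanguageModel {
      fun answer(prompt: String): String
  }

  // Stand-in for a small model running entirely on the phone (Gemini Nano class).
  class OnDeviceModel : LanguageModel {
      override fun answer(prompt: String) = "on-device answer to: $prompt"
  }

  // Stand-in for a larger model hosted in the cloud (Gemini Pro class).
  class CloudModel : LanguageModel {
      override fun answer(prompt: String) = "cloud answer to: $prompt"
  }

  // Prefer the local model; escalate to the cloud only when the user opts in
  // and the request looks too heavy for the small on-device model.
  fun route(prompt: String, userAllowsCloud: Boolean): String {
      val local: LanguageModel = OnDeviceModel()
      val cloud: LanguageModel = CloudModel()
      val tooHeavyForDevice = prompt.length > 2000   // crude illustrative heuristic
      return if (userAllowsCloud && tooHeavyForDevice) cloud.answer(prompt)
             else local.answer(prompt)
  }

  fun main() {
      // Default behavior: stay on-device unless the user has opted into cloud processing.
      println(route("Summarize this recorded call.", userAllowsCloud = false))
  }

Under this pattern, the cloud branch should be the exception; the Wired quote suggests that on the S24 series it is closer to the rule.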

This puzzling reliance on the cloud suggests that Samsung, like Google and Amazon at the end of 2023, has rushed its new AI features to market somewhat prematurely in order to challenge the runaway success of ChatGPT's large model as quickly as possible, and thereby allay any concerns of Samsung's shareholders.

Finally, full disclosure requires that the editor of this blog declare that he has multiple Macs and only Macs in his home office, but he has been pleased by the performance of his Samsung S23 Ultra smartphone. However, he will switch to an iPhone in a nanosecond if Apple releases an iPhone that makes intensive use of small language models that are self-contained on his phone. Why? Because powerful small models won't need to share his healthcare and/or other personal AI queries with a corporate cloud.


B. Top stories in past week ...
  1. LLM News
    "OpenAI Doesn’t Want to Train on New York Times Data After Lawsuit, Altman Says", Brad Stone and Jake Rudnitsky, Bloomberg, 1/18/24 ***-

    -- This story also covered by CNN.

  2. LLM News
    "Here are the key differences between the Samsung Galaxy S24 phones", Sheena Vasani, The Verge, 1/18/24
    -- This story also covered by Mashable, CNET, The Verge, Wired, Bloomberg ... and Samsung.

This page contains links to responses by Google's Bard chatbot, running Gemini Pro, to 12 questions that should be asked more frequently, but aren't. As a consequence, too many readily understood AI terms have become meaningless buzzwords in the media.

