Tuesday, February 6, 2024

Big Tech's investor conference calls ... TL;DR and podcast 5Feb24

Last update: Wednesday 2/7/24 
Welcome to our 5Feb24 TL;DR summary + podcast about the past week's top AI stories on our "Useful AI News" page: GenAI-related comments by CEOs during investor conference calls about the latest quarterly reports from Google, Apple, Amazon, and Microsoft.

Click link for podcast (opens in new tab) 
Click the "start" button the when the podcast page is loaded
... 
 If audio fails to start, or gets stuck, try reloading the page
TL;DR link  HERE
A. TL;DR ... Top 4 stories in past week  ...
Our top stories last week were about the Big Tech Five minus Facebook, i.e., about Amazon, Apple, Google, and Microsoft. To be specific, our stories related to GenAI comments made by their CEOs during investors' conference calls that discussed their last quarterly earnings reports. 

The challenge to the Big Tech CEOs was straightforward: if a company was making billion-dollar investments in generative AI, its CEO had to (1) identify a large market segment wherein the CEO's company was dominant, and (2) convince potential investors that the company had a strategy that would enable it to use generative AI to derive substantially larger revenue streams from that segment.

At this point, Facebook has invested billions upon billions into the virtual reality of the "metaverse", but not in generative AI; so its massive increase in profits in the last quarter was not related to its celebrity chatbots or to any other applications of GenAI.

1) Google's large, but disappointing earnings
According to the Wall Street Journal, "Sales in Google’s cloud-computing business climbed 26%, recovering somewhat following a disappointing third quarter that some investors viewed as a sign of weak demand for the company’s AI services."

Similarly, the NY Times commented that "Alphabet, Google’s parent company, on Tuesday reported search revenue and a profit margin for its latest quarter that fell short of Wall Street’s expectations, in a sign that growth in its flagship business and recent layoffs, intended to cut costs, were not enough to offset its growing investment in artificial intelligence."

Google dominates search and has a substantial share of cloud computing; however, its search and mapping services are free to end users and paid for by advertisers. Unfortunately, Google was unable to convince investors that its increase in cloud sales was powered by increases in AI-driven search.

2) Apple GenAI coming "later this year"
Apple has not shared any data about its investments in generative AI. So CEO Tim Cook's following statements, as quoted by The Verge, command our attention:
"Our M.O., if you will, has always been to to do work and then talk about work, and not to get out in front of ourselves. And so we’re going to hold that to this as well. But we have got some things that we’re incredibly excited about, that we’ll be talking about later this year ... Let me just say that I think there’s a huge opportunity for Apple with generative AI and with AI, without getting into many more details or getting out ahead of myself,” Cook said to conclude the call.
Many questions come to mind: Why has Apple taken so long to get to the starting line of this race? Why is Cook so confident that Apple will start offering GenAI services soon? And what changed between March 2023 (the launch of GPT-4), when the other members of the Big Five got into the race, and now?

Lack of data encourages speculation. Throughout most of 2023, generative AI was dominated by large language models (LLMs): models that needed to know everything about everything, models that were therefore trained on all accessible digital information, even when the developers had questionable rights to access such data, even when the data entailed violations of individual privacy, even when the models required extensive "prompting" for users to obtain the desired results, and even when developers had to remind users to check all results for possible errors and hallucinations. All of this sounds clumsy, rude, and very un-Apple, a company whose highly paid designers work overtime to ensure the smoothest possible user experience wherein "everything just works" because of "Apple magic".

Apple's iCloud is housed in Google's cloud. According to The Information, Apple is the largest corporate client for Google’s storage. If Apple pursued LLMs, it would have to pay the premium prices Google charges for its graphics processing units (GPUs), the costs that make LLMs so expensive to operate, no matter whose cloud houses them.

However, as noted in reports in various tech media since the summer of 2023, many observers have identified the surprising power of small language models (SLMs), i.e., small, focused models that exhibit the same kinds of emergent cognitive skills as LLMs, but don't know "everything".

So we have to speculate that Apple will finally launch its GenAI initiatives later this year because it will mainly deploy relatively inexpensive small language models housed on its users' iPhones and Macs, rather than expensive LLMs housed on Google's cloud. But Apple will probably make comparatively larger investments in the design of elegant user interfaces for its chatbots, "elegant" as in simple, but powerful interfaces that will not require users to bumble through prompt engineering.
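To make the cost argument concrete, here is a minimal sketch, under stated assumptions, of what running a small language model locally looks like today. It uses the Hugging Face transformers library and Microsoft's publicly released phi-2 checkpoint (see the Basics section below) purely as an illustration; Apple has not said which models, if any, it will ship on its devices.

    # Illustrative sketch: running a small language model (SLM) on local hardware.
    # Assumptions: the Hugging Face "transformers" and "torch" packages are installed,
    # and Microsoft's public phi-2 checkpoint (about 2.7B parameters) is a stand-in;
    # Apple has not disclosed which models, if any, it will deploy on-device.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/phi-2"  # small enough to fit on a laptop-class machine
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)  # older transformers versions may need trust_remote_code=True

    prompt = "In one sentence, why are small language models cheaper to operate than large ones?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The economic point is that a model of this size can run on consumer hardware without a per-query cloud GPU bill, which is presumably what would make on-device chatbots attractive to Apple.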

3) Amazon's new chatbot
A few weeks ago, Amazon announced a new chatbot called "Q", but nobody cared. What market segment dominated by Amazon would provide a user base for Q? Amazon offered no answers. Indeed, what market segment does Amazon dominate, besides the cloud? Duh ... online shopping? Correct-a-mundo!!!

This week Amazon offered a new chatbot called "Rufus" that will be its version of a Microsoft copilot. Whereas Microsoft's copilots help users use Microsoft's office productivity apps more effectively, Rufus will help users go shopping on Amazon more efficiently. According to the NY Times:
"Customers can ask the tool, Rufus, product questions directly in the search bar of the company’s mobile app, Amazon said in a blog post. The A.I. will then provide answers in a conversational tone. The examples provided in the announcement included comparing different kinds of coffee makers, recommendations for gifts and a follow-up question about the durability of running shoes."

Before Rufus, most customers used Google to search for information about the kinds of products they were interested in buying; then they went to Amazon to buy the most cost-effective version of that product in their preferred size, color, etc. The NY Times does not indicate whether Rufus will be based on a large language model or a small language model, but common sense suggests that powerful small models would be the most profitable option for Amazon.
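Amazon has not published any details about how Rufus is built, but the interaction the NY Times describes, a product question typed into the search bar and answered conversationally from catalog data, maps onto a now-familiar pattern. The sketch below is purely illustrative: it uses the public OpenAI chat API as a stand-in model, and the product records and prompt are invented for the example.

    # Illustrative sketch only: a Rufus-style product Q&A exchange.
    # Amazon has not disclosed Rufus's design; this shows the generic pattern of
    # grounding a chat model in catalog data. It uses the public OpenAI Python
    # client (pip install openai), and the product records are invented.
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    catalog_snippets = [
        "Drip coffee maker, 12-cup glass carafe, programmable timer, $49.",
        "Single-serve pod machine, brews one cup in about a minute, $89.",
    ]

    question = "Which coffee maker is better for a small apartment?"

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model; not what Amazon uses
        messages=[
            {"role": "system",
             "content": "You are a shopping assistant. Answer briefly, in a "
                        "conversational tone, using only the product data below.\n\n"
                        + "\n".join(catalog_snippets)},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)

In practice such a loop would first retrieve the relevant catalog entries from a search index (retrieval-augmented generation), which is one reason a small, focused model could plausibly be adequate, and far cheaper per query, than a general-purpose LLM.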

4) Microsoft's current LLM success finances its future SLM success
We assume that most readers of this blog understand the secret of Microsoft's success in 2023:
  • Microsoft dominates the world-wide market for office productivity apps
  • So Microsoft converted OpenAI's ChatGPT into copilots for each of its productivity apps and charged a monthly subscription fee for each member of the enterprise who used a copilot for a productivity app.
Unfortunately, it cost Microsoft billions of dollars to underwrite OpenAI's development of ChatGPT based on large language models (GPT-3.5, GPT-4), and billions more to operate its copilots. Fortunately, Microsoft's own researchers documented the surprising power of small language models by the end of 2023. Indeed, The Information reported that during the conference call, Microsoft's CEO, Satya Nadella:
"... also touted the success of Microsoft’s “small language models” that are a less costly option for customers, compared to OpenAI’s large language models. Nadella said AT&T and Thomson Reuters were testing out the SLMs on Azure [Microsoft's cloud]. As we reported last week, the company formed a new group dedicated to SLMs."
Nadella's successful presentation led to yet another rise in the price of Microsoft's stock.
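For readers who have never peeked behind a copilot, the recipe itself is not mysterious: wrap a chat model in a system prompt tied to one productivity task and charge per seat. Microsoft's actual implementation is proprietary; the sketch below only illustrates the wrapper pattern, using the public OpenAI chat API and an invented function name as stand-ins.

    # Illustrative sketch only: the generic "copilot" wrapper pattern, i.e., a chat
    # model constrained by a system prompt to a single productivity task.
    # Microsoft's real copilots are proprietary; the public OpenAI Python client
    # is a stand-in, and the function name and sample data are invented.
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    def spreadsheet_copilot(request: str, sheet_as_csv: str) -> str:
        """Answer a request about a small spreadsheet supplied as CSV text."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # stand-in model, not Microsoft's deployment
            messages=[
                {"role": "system",
                 "content": "You are a spreadsheet assistant. Use only the data "
                            "supplied and explain any formula you propose."},
                {"role": "user",
                 "content": f"Spreadsheet data:\n{sheet_as_csv}\n\nRequest: {request}"},
            ],
        )
        return response.choices[0].message.content

    print(spreadsheet_copilot("Total the revenue column.", "month,revenue\nJan,120\nFeb,95"))

Every seat paying a monthly fee is recurring revenue, while every call to a large model is a GPU cost; that gap is exactly what swapping in small language models on Azure is meant to narrow.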

Correction: The TL;DR (and podcast) asserted that all four top stories were based on CEOs' comments during conference calls. This was not true of Amazon's story: the NY Times based its report on an announcement by Amazon, and no other major tech publication had covered that announcement by the time the editor composed the TL;DR.


B. Top 4 stories in past week ...
  1. Google
    "Google’s Ad Sales Fall Short of Wall Street’s Lofty Expectations", Miles Kruppa, Wall Street Journal, 1/30/24 
    -- This story also covered by NY Times, Forbes, and The Information, 1/31/24 ... Link downloads the full paywalled article for 3 free reads.

  2. Apple
    "Tim Cook confirms Apple’s generative AI features are coming ‘later this year’", Chris Welch, The Verge, 2/1/24 *** 
    -- This story also covered by CNBC, ZDNet, and Computerworld

  3. "Amazon Enters Chatbot Fray With Shopping Tool", Karen WeiseNY Times, 2/3/24  
     
  4. Microsoft
    "Microsoft Dishes on AI Revenue; Google CEO Says ‘Agents’ Are Coming", Aaron Holmes and Jon Victor, The Information, 1/31/24 ... Link downloads the full paywalled article for 3 free reads
    -- "CEO Satya Nadella also touted the success of Microsoft’s “small language models” that are a less costly option for customers, compared to OpenAI’s large language models. Nadella said AT&T and Thomson Reuters were testing out the SLMs on Azure. As we reported last week, the company formed a new group dedicated to SLMs."
    -- Note: As per its title, this article also discusses Google's earnings call
    -- Microsoft's quarterly earnings also reported by Reuters and NY Times

C. Basics 
  • "Watch an A.I. Learn to Write by Reading Nothing but Shakespeare or Harry Potter or Jane Austen or Star Trek or Moby Dick", Aatish Bhatia, NY Times, 4/27/23 
  • "What Is a Large Language Model, the Tech Behind ChatGPT?", Kurt Muehmel, Data Iku, 6/7/23 
  • "Let's learn about artificial intelligence -- A series about AI, machine learning, ChatGPT, and more", Mark Wiemer, Medium, 3/21/23

  • "Textbooks Are All You Need II: phi-1.5 technical report", Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee, Microsoft Research, September 2023
  • "Phi-2: The surprising power of small language models", Mojan Javaheripi and Sébastien Bubeck , Microsoft Research, 12/12/24

  • Dozen Basic AI FAQs
    This page contains links to responses by Google's Bard chatbot to 12 questions that should be asked more frequently, but aren't. 


Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.