Monday, April 29, 2024

Big Tech's quarterly reports ... Microsoft's New Push Into Smaller AI Systems ... Rabbit’s AI Assistant Is Here ... TL;DR summary 29Apr24

Last update: Monday 4/29/24 
Welcome to our 29Apr24 TL;DR summary of the past week's top AI stories on our "Useful AI News" page: (1) Big Tech's quarterly reports ... Q1 (Jan, Feb, Mar 2024), (2) Microsoft Makes a New Push Into Smaller AI Systems, and (3) Rabbit’s AI Assistant Is Here
No podcast this week 

TL;DR link HERE

A. TL;DR summary of Top 3 stories 


1) Big Tech's quarterly reports ... Q1 (Jan, Feb, Mar 2024)
In 2023, the sudden emergence of generative AI drove the world's biggest tech companies -- Alphabet, Amazon, Apple, Meta, and Microsoft -- to make massive capital investments in expensive GPU chips for their own systems and/or for access to expensive GPUs provided by cloud services. 

Therefore, at the end of the first three months of 2024, the big question was which, if any, of the big five have been able to extract substantial revenue and profits from their massive investments. The January-February-March 2024 quarterly reports from Alphabet, Meta, and Microsoft were released this week. Amazon and Apple will release their reports for these first three months next week.

The good news for stockholders was that all three companies were highly profitable in the first quarter. Indeed, Alphabet was so profitable that it announced the issue of its first dividend ever.

Unfortunately, the news for Meta's stockholders was mixed. Evidently its soaring profits had little or no connection to its previous investments in generative AI. So when CEO Mark Zuckerberg provided "guidance" that Meta would continue to make massive investments in GenAI over the next three months, the value of its stock plunged.

In stark contrast, Microsoft's stockholders learned not only that Microsoft was more profitable, but that its profits had strong links to its AI investments. Subscriptions to Microsoft's Copilot were substantially higher in this quarter than in the previous quarter. And the substantial increases in income from its Azure cloud operations were directly linked to (1) increased use of its own AI services, (2) increased use of the AI services provided by its partner OpenAI, and (3) increased use of AI services from other companies hosted in Microsoft's Azure cloud.

While not as strong as the links between Microsoft's profits and its AI investments, Alphabet's increased profitability had strong enough links to AI-based increases in cloud-services revenue to push Alphabet's market capitalization past the two trillion dollar mark.


2) Microsoft Makes a New Push Into Smaller AI Systems
In late January 2024, Microsoft announced the formation of a small language model (SLM) team that would exploit its discovery that small models had "surprising power", e.g., its Phi-2 model. Its announcement declared that:
  • Microsoft had formed a team of some of its best techs to develop small language models (SLMs).
  • The team would report directly to Kevin Scott, Microsoft's chief technology officer (CTO), the same visionary top-level manager who oversaw Microsoft's partnership with OpenAI, the partnership that successfully transferred and transformed OpenAI's large language model technology into copilots for Microsoft's office productivity apps.
On 4/23/24, Microsoft's blog announced its production of Phi-3: 

"Starting today, Phi-3-mini, a 3.8B language model, is available on Microsoft Azure AI Studio, Hugging Face, and Ollama.

    • Phi-3-mini is available in two context-length variants—4K and 128K tokens. It is the first model in its class to support a context window of up to 128K tokens, with little impact on quality.
    • It is instruction-tuned, meaning that it’s trained to follow different types of instructions reflecting how people normally communicate. This ensures the model is ready to use out-of-the-box." 

... "In the coming weeks, additional models will be added to the Phi-3 family to offer customers even more flexibility across the quality-cost curve. Phi-3-small (7B) and Phi-3-medium (14B) will be available in the Azure AI model catalog and other model gardens shortly."

"Phi-3 models significantly outperform language models of the same and larger sizes on key benchmarks (see benchmark numbers below, higher is better). Phi-3-mini does better than models twice its size, and Phi-3-small and Phi-3-medium outperform much larger models, including GPT-3.5T." 

A copy of Phi-3-mini's impressive benchmarks in a pdf file can be found HERE 

What's not included in this announcement is a set of instructions for how ordinary users can gain access to Phi-3-mini, the only version currently available. Given that Claude 3 is the chatbot family to beat nowadays for readers of this blog, i.e., for readers who are not AI experts, Phi-3 should be compared to Claude 3 Haiku, the smallest Claude 3 model. Claude 3 Haiku can be accessed by following a couple of simple steps:
  • Direct your browser to claude.ai, then set up an account
  • Once you have an account, merely click the "down arrow" on the left side of the prompt input box to select "Claude 3 Haiku" from a menu that also includes Claude 3 Sonnet and Claude 3 Opus.
Another Microsoft blog note provides step-by-step instructions on how to gain access to Phi-3; it can be found HERE. The first two steps are quoted below:
  • "Getting Started with Phi-3 and Azure AI Studio: A Step-by-Step Guide

    Step 1: Set Up Your Environment
    Create an Azure account if you haven't already. 

    Install the Azure CLI or use a cloud-based IDE like Visual Studio Code.
    Familiarize yourself with Python, as it's the primary language used in Azure AI Studio.


    Step 2: Set Up Your Azure AI Studio

    Go to the Azure AI Studio and follow the installation instructions for your preferred platform (Windows, macOS, or Linux).
    Launch the Azure AI Studio and sign in with your Azure account credentials."
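For readers who would rather skip Azure entirely, the Ollama route mentioned in Microsoft's announcement is much lighter-weight. Here's a minimal sketch of querying a locally served Phi-3-mini through Ollama's REST API, using nothing but Python's standard library. Two assumptions not stated in Microsoft's post: Ollama is installed and serving on its default port (11434), and "phi3" is Ollama's tag for Phi-3-mini; check Ollama's model library to confirm.

```python
# Minimal sketch: query a locally served Phi-3-mini via Ollama's REST API.
# Assumptions (not from Microsoft's announcement): Ollama is installed and
# running on its default port 11434, and "phi3" is Ollama's Phi-3-mini tag.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama for a single JSON reply instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "phi3") -> str:
    """POST the prompt to the local Ollama server and return its reply text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
#   print(ask("In one sentence, what is a small language model?"))
```

At a terminal, the equivalent is simply `ollama run phi3` followed by your prompt, though again the model tag here is an assumption, not something Microsoft's announcement specifies.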

No thanks. For the time being, the editor of this blog will stick with Claude 3 Haiku.

So who might benefit from Phi-3? As reported by The Information in the following article, Big Tech has encountered substantial resistance from its largest customers to making large purchases of AI services because AI services based on large language models have been too expensive to be cost-effective. 
  • "Amazon, Google Quietly Tamp Down Generative AI Expectations", Aaron Holmes and Anissa Gardizy, The Information, 3/12/24
But this judgment might change if cheaper services, provided by smaller models that satisfy all but their customers' most demanding AI requirements, were added to the mix. Larger customers would have the tech support staff required to follow the installation process described in the blog note (above), then configure specialized apps running on the models that meet their employer's particular requirements.

On the other hand, for individual professionals (like the editor of this blog) and small business operations, a large potential benefit of small models would be privacy -- if the models were small enough to run on their smartphones. 

At this point, most small models offered by Big Tech run in the cloud. But if users could run them on their phones, they would have no worries that their concerns about health, finances, or personal relationships, as expressed in their prompts, would ever be shared with advertisers or government agencies. To paraphrase an old maxim about Las Vegas: "What happens on my phone should stay on my phone." To date, only one member of Big Tech has demonstrated a deep and pervasive commitment to respecting its customers' privacy ... Apple. That's why so many of us are still waiting for Apple's belated entry into the chatbot race.


3) r1 -- Rabbit’s AI Assistant Is Here
As CNET describes it:
  •  "The Rabbit R1 is a pocket-size computer that's almost like a phone, but isn't ... The Rabbit R1 isn't meant to replace your smartphone. Rather, it's an AI-powered virtual assistant that wants to do certain things better than your phone."
An earlier CNET description declared: 
  • "The device's creator says it can learn to operate apps on your behalf. Imagine requesting an Uber or ordering takeout by simply pressing a button and speaking your request."
Wired describes the r1's hardware as follows: "On the right of the screen is a scroll wheel for interacting with the interface, and above it is a camera that can swivel to the front, back, or into the inner casing of the device for privacy. On the right edge is a button, which is the main way to select anything on the screen ... But the goal for the R1 is more or less to replace your apps. Instead of hunting for an icon, just push the button and ask the R1 to handle something ... The idea is to speak and then compute. No need for apps—the computer will just understand."

The r1 costs only $199 and, unlike the Humane AI Pin, has no monthly subscription fee. It currently manages four apps: Spotify, Uber, DoorDash, and Midjourney. 

All of this sounds promising, but what did tech reviewers think after they began to use their r1's?
  • Engadget -- "I remain skeptical about the usefulness of AI devices, but, in large part due to its price and ability to work with third-party apps at launch, Rabbit has already succeeded in making me feel like Alice entering Wonderland."

  • Mashable -- "As the old saying goes, "If something is too good to be true, it probably is." Jesse Lyu, CEO of Rabbit R1, keeps boasting that the Rabbit R1 is only $199 and is subscription free. However, there's no way in hell it can be subscription free for long. Once the hype dies down and nerds like me wipe the shelves clean, what's next? To put it succinctly, how does Rabbit R1 intend to make money?"

  • ZDNet -- "Naturally, a good part of the AI hardware experience right now is unpolished, needing a group of users enthusiastic enough to pay to be beta testers. I'm always against buying a product for its promise of future updates; nothing is guaranteed. If, however, you're interested in what the future of AI gadgets could look like, the Rabbit R1 may be the most compelling option on the market today."

B. Top 3 stories in past week ...  
  1. Misc
    Big Tech quarterly reports ... Q1 (Jan, Feb, Mar 2024) *** 
    -- "Alphabet Beats Revenue Estimates as AI Fuels Cloud Growth", Julia Love and Davey Alba, Bloomberg, 4/26/24
    ---- Alphabet Q1 also covered by Reuters, The Information, NY Times ... and Alphabet

    -- "Microsoft Earnings Jump on AI Demand", Tom Dotan, Wall Street Journal, 4/26/24
    ---- Microsoft Q1 also covered by The Information, NY Times ... and Microsoft

    -- "Meta plunges 16% on weak revenue guidance even as first-quarter results top estimates", Ashley Capoot, CNBC, 4/24/24 ...
    ---- Meta Q1 also covered by Bloomberg, Business Insider, NY Times ... and Meta

    -- Amazon will publish its Q1 2024 report on 4/30/24 
    -- Apple will publish its fiscal Q2 2024 (Jan, Feb, Mar) report on 5/2/24

  2. SLM News
    "Microsoft Makes a New Push Into Smaller A.I. Systems", Karen Weise and Cade Metz, NY Times, 4/23/24 *** 
    -- This story also covered by The Verge, Engadget, Ars Technica, ZDNet ... and AI Revolution (video) ... and Microsoft ➡ Microsoft benchmarks (pdf)
     
  3. Misc
    "Rabbit’s AI Assistant Is Here. And Soon a Camera Wearable Will Be Too", Julian Chokkattu, Wired, 4/24/24 *** 
    -- This story also covered by The Verge, CNET, TechCrunch, Mashable, Engadget, ZDNet ... and Rabbit (homepage) ➡ Rabbit (video) 

C. Dozen Basic AI FAQs  HERE
This page contains links to responses by Google's Bard chatbot (running Gemini Pro) to 12 questions that should be asked more frequently, but aren't. As a consequence, too many readily understood AI terms have become meaningless buzzwords in the media.


Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.