Last update: Wednesday 4/17/24
Welcome to our 14Apr24 TL;DR summary of the past week's top 3 AI stories on our "Useful AI News" page ➡ (1) OpenAI's new robot, (2) Humane's Ai Pin, and (3) Claude's 2-minute summary of Ezra Klein's 90-minute interview with Dario Amodei, CEO of Anthropic
A. TL;DR summary of Top 3 stories
1) OpenAI's new robot called "Figure 01"
Editor's apology: This story is over three weeks old. I ignored it when it first appeared because:
- All robots are controlled by some kind of AI model (mostly pre-GenAI)
- An AGI model does not need to be embedded in a humanoid form (head, two arms, two legs, etc.) to be an AGI and, conversely, putting GPT in a humanoid robot does not make GPT an AGI. Nevertheless, I should have called OpenAI's clumsy humanoid robot, Figure 01 (video), to my readers' attention if only to acknowledge that so many other publications seemed to think that a sleek, bright-eyed, stainless-steel humanoid was inherently more intelligent than a model residing on a clump of expensive Nvidia chips on a server in the cloud
- A clumsy Figure 01 + a limited GPT does not equal AGI (Artificial General Intelligence), the Holy Grail, nor progress thereunto. This physically challenged robot is just another attention-grabbing publicity stunt from OpenAI's "hey gang, let's put on a show" leadership. Let's hope that Google can resist the urge to play "me too" ... yet again.
FYI, here are three YouTube videos of some useful, truly nimble robots from Boston Dynamics:
- Spot for Safety and Incident Response
Spot has four legs (like a dog); walks quickly up and down stairs, inclines, and rough terrain; picks itself up if it falls down; opens doors for itself and other robots; performs autonomous tasks; etc.
- Inside the Lab: Taking Atlas From Sim to Scaffold
Atlas is a powerful humanoid robot that runs, jumps, does somersaults ... and useful work in a warehouse environment, lifting and tossing objects of various weights and shapes from here to there.
- Do You Love Me?
Spot, Atlas, and other friends dance nimbly to a Motown classic.
By an odd coincidence, Boston Dynamics released a new video yesterday (April 16th) that features an upgraded version of Atlas that makes the Atlas in the old videos look like a hopeless klutz ... progress. Nevertheless Spot, Atlas, or any other agile current robot + any 2024/2025 version of GPT, Gemini, or Claude will not equal AGI nor represent a substantial step towards AGI.
2) Humane Ai Pin reviews
The AI pin was designed to be a wearable replacement for a smart phone. It's a lightweight device that's worn like a badge on a user's shirt or other garment. It's held in place by a magnet that's worn underneath the garment. Other key features include:
- Accepts vocal requests from users and provides vocal responses. Can also project information and images onto the palm of a user's hand.
- Makes telephone calls and interacts with an AI language model and other support programs in the cloud.
- Identifies objects in the user's surroundings, and takes pictures and short videos.
- Produces real-time translations into other languages.
- Costs $699 plus a $24 per month subscription fee for its information services.
The pin has lots of whiz-bang features, but how good is it? Here are a few comments from recent reviews in some prominent publications:
- Wired -- "Accuracy of answers is mixed (and it's slow). Projector is annoying to interact with and is impossible to see in daylight. Poor photos and videos in low light."
- The Verge -- "For $699 and $24 a month, this wearable computer promises to free you from your smartphone. There’s only one problem: it just doesn’t work."
- Engadget -- "The Humane AI Pin is the solution to none of technology's problems"
- NY Times -- "The $700 Ai Pin, funded by OpenAI’s Sam Altman and Microsoft, can be helpful — until it struggles with tasks like doing math and crafting sandwich recipes."
- Bloomberg -- "At best, it will be as an extra accessory for tech and AI fanatics willing to spend a pretty penny."
In short, the pin's version 1.0 won't cut it. Perhaps version 2.0 or 3.0 will fulfill its developers' "visionary" ambitions.
3) Ezra Klein interviews Dario Amodei, CEO Anthropic
Here's a quote from Klein's introduction to the podcast:
- "Back in 2018, Dario Amodei worked at OpenAI. And looking at one of its first A.I. models, he wondered: What would happen as you fed an artificial intelligence more and more data? He and his colleagues decided to study it, and they found that the A.I. didn’t just get better with more data; it got better exponentially. The curve of the A.I.’s capabilities rose slowly at first and then shot up like a hockey stick."
"Amodei is now the chief executive of his own A.I. company, Anthropic, which recently released Claude 3 — considered by many to be the strongest A.I. model available. And he still believes A.I. is on an exponential growth curve, following principles known as scaling laws. And he thinks we’re on the steep part of the climb right now."
Here's a link to the 2020 paper in which OpenAI staff described these scaling findings. All of the paper's authors worked at OpenAI at the time it was submitted; several of them, including Dario Amodei, later left to co-found Anthropic.
- "Scaling Laws for Neural Language Models", Jared Kaplan, Sam McCandlish ... and Dario Amodei, pdf -- arXiv (Cornell University), 1/23/2020.
Last week, the editor of this blog declined to produce a TL;DR summary of Klein's interview with Nilay Patel, the editor of The Verge. The interview was really a fast-moving, 80-minute conversation between two old friends, both of whom were computer savvy. Their conversation moved so adroitly from one topic to the next that the range of topics they covered was too wide to be reduced to a summary that could be read in ten minutes or so.
This week, the editor again declines to produce a TL;DR summary of Klein's interview with Dario Amodei, the CEO of Anthropic. This time the insurmountable hurdles were (1) the even greater length of the podcast, 90 minutes, and (2) the complexity of the questions posed by Klein and the corresponding complexity of Amodei's answers. Indeed, some of Klein's questions were more than a page long, and some of Amodei's answers were even longer.
Then it occurred to the editor that this was a good opportunity to demonstrate the capacity of Claude, Anthropic's chatbot, to provide appropriate summaries of long, complex discussions. (Note: Claude was running on Claude 3 Opus, Anthropic's largest LLM).
-- Note: Klein's usual end-of-program banter has been deleted
An edited version of the truncated transcript was created that surrounds Klein's questions with "START Klein ... END Klein" labels and surrounds Amodei's answers with "START Amodei ... END Amodei" labels. The labelled comments were then appended to the following instructions to Claude to create a very long prompt:
- "Ezra Klein interviewed Dario Amodei.
Here is a transcript of the interview.
Klein's comments begin with a "START KLEIN ..." label
and end with an "END KLEIN ..." label.
Similarly, Amodei's comments begin with a "START AMODEI ..." label
and end with an "END AMODEI ..." label.
Please provide a concise summary of the text between each "START KLEIN ..." and "END KLEIN ..." label, and between each "START AMODEI ..." and "END AMODEI ..." label.
Assume that you are a member of the staff of a very busy U.S. senator who prefers the shortest possible summaries.
Please include the label of each summary in your response.
When you finish summarizing the separate comments, please provide a separate overall summary of all of your summaries."
These instructions were followed by a copy of the contents of the labelled transcript of the 90 minute interview.
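The assembly step described above can be sketched in a few lines of Python. The turn structure, numbering scheme, and helper names below are illustrative assumptions, not the editor's actual tooling; the instruction wording is abridged from the prompt quoted above.

```python
# A minimal sketch of building the labelled prompt, assuming the transcript
# has already been split into (speaker, text) turns. The per-turn numbering
# is an illustrative assumption, not part of the editor's original labels.

INSTRUCTIONS = """Ezra Klein interviewed Dario Amodei.
Here is a transcript of the interview.
Klein's comments are wrapped in START KLEIN / END KLEIN labels,
and Amodei's comments in START AMODEI / END AMODEI labels.
Please provide a concise summary of the text between each pair of labels.
Assume that you are a member of the staff of a very busy U.S. senator
who prefers the shortest possible summaries.
Please include the label of each summary in your response.
When you finish summarizing the separate comments, please provide a
separate overall summary of all of your summaries."""

def label_turns(turns):
    """Wrap each (speaker, text) turn in START/END labels, numbering the turns."""
    blocks = []
    for i, (speaker, text) in enumerate(turns, start=1):
        tag = speaker.upper()
        blocks.append(f"START {tag} {i}\n{text}\nEND {tag} {i}")
    return "\n\n".join(blocks)

def build_prompt(turns):
    """Prepend the summarization instructions to the labelled transcript."""
    return INSTRUCTIONS + "\n\n" + label_turns(turns)

# Example with two hypothetical turns:
turns = [
    ("Klein", "What would happen as you fed an AI more and more data?"),
    ("Amodei", "It got better exponentially -- the scaling laws."),
]
prompt = build_prompt(turns)
```

The resulting string is what would be pasted (or sent via API) to Claude as a single very long prompt.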
To view the summaries provided by Claude ➡ Click HERE
Claude's TL;DR, i.e., its summary of its summaries
As the reader will see, Claude's summary of its summaries only contains four short paragraphs, so it can be read in less than two minutes.
- "Dario Amodei, CEO of Anthropic, discusses the exponential pace of AI progress, which he believes will lead to transformative breakthroughs in the next few years. He describes Anthropic's research into AI persuasiveness, noting the technology's potential for personalized mass influence. Amodei is uncertain how to reconcile interpretability with ever-increasing model complexity, but holds out hope that AI systems can develop an internal 'web of truth.'"
- "Anthropic has a 'responsible scaling plan' that defines progressive AI safety levels, which Amodei estimates we could reach the highest of by 2028, potentially compelling industry cooperation to mitigate risks. However, the compute requirements of advanced AI pose challenges around semiconductor supply chains, geopolitical strategy, energy consumption, and climate change."
- "Amodei is uncomfortable with the concentration of power in private AI companies as capabilities grow, but is unsure how to apportion control to the public interest. He believes current AI training is fair use of intellectual property, but acknowledges the broader economic disruption and need for new social contracts as AI transforms labor."
- "As a hypothetical parent, Amodei struggles to advise how to prepare children for the AI-driven future he anticipates. While he expects AI to redefine work in unpredictable ways, he worries it could short-circuit crucial stages of human cognitive development. Amidst this uncertainty, the exponential growth of AI capabilities continues unabated."
Note: hypothetical parent??? ... Hmmmmmm ... Claude's reference to Amodei as a "hypothetical parent" is a comical, but understandable linguistic glitch, the kind of idiomatic error that a human might make if English was the human's second or third language. When Klein asked for advice as to how he should educate his own kids to cope with generative AI, Amodei replied that he himself had no children. Then he said that he would have difficulty imagining how to cope with this challenge. Amodei's exact words were "The very short answer is, I don’t know, and I have no idea, but we have to try anyway, right? " ... which led Claude to call Amodei a "hypothetical parent" ... 😀
Does Claude's 2 minute summary of its summaries cover all of the key points made by Klein and Amodei in their 90 minute chat? Of course not. But it does convey enough sense of the scope and depth of their conversation to enable most readers to decide whether they might want to read Claude's more detailed summary of each comment made by Klein and Amodei or even listen to the entire 90 minute podcast. Indeed, that's the purpose of a TL;DR -- to help readers decide whether something really is too long to read or listen to.
B. Top 3 stories in past week ...
- OpenAI
"Here's Why OpenAI's NEW Robot Is DIFFERENT", AE Uncovered (YouTube video), 3/13/24 ***
-- This story also covered by articles with videos on PC Magazine, Gizmodo, Mashable, Daily Beast ... and OpenAI
- Humane
"Humane Ai Pin reviews: Top 5 common complaints about the 'smartphone killer'", Kimberly Gedeon, Mashable, 4/12/24 ***
-- This story also covered by Wired, The Verge, Engadget, NY Times, Bloomberg
- Misc
Part 3 of a trio of NY Times audio interviews from Ezra Klein about generative AI.
-- "What if Dario Amodei Is Right About A.I.?", Guest = Dario Amodei (CEO of Anthropic), Ezra Klein (NY Times podcast, 93 minutes + transcript), 4/12/24 ***
This page contains links to responses by Google's Bard chatbot (running Gemini Pro) to 12 questions that should be asked more frequently, but aren't. As a consequence, too many readily understood AI terms have become meaningless buzzwords in the media.
Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.