Last update: Monday 9/16/24
Welcome to our TL;DR 15Sep24 summary by ChatGPT + podcast of the past week's top 2 stories (plus an editor's note) on our "Useful AI News" page. As usual, the podcast audio is a recording of the Descript app reading ChatGPT's summary using an AI imitation of the voice of the editor of this blog ... (1) New Strawberry details, (2) Fei-Fei Li's startup ... and an editor's note about (3) SearchGPT vs ChatGPT. Audio podcast ... 6.5 min
If audio fails to start, or gets stuck, try reloading the page
A. ChatGPT's TL;DR summary of Top 2 stories plus an editor’s note
1) "New Details on OpenAI’s Strawberry" a/k/a "GPT-o1"
-- Stephanie Palazzolo, Erin Woo, and Amir Efrati, The Information, 9/10/24 ***
-- This story also covered by Wired, The Verge, TechCrunch, VentureBeat, Reuters, Computerworld, Gizmodo ... and OpenAI
PROMPT:
Please provide a concise summary of what OpenAI has revealed about its new GPT-o1 model
RESPONSE:
OpenAI's newly introduced GPT-o1 model, part of the "Strawberry" project, marks a significant leap forward in AI's reasoning capabilities. Unlike traditional models, which generate responses based primarily on pattern recognition, GPT-o1 leverages reinforcement learning to tackle complex, multi-step problems by reasoning through each task step-by-step. This technique, known as "chain-of-thought reasoning," allows the model to break down challenges and solve them more methodically, similar to human thinking.
The model excels in areas such as mathematics, coding, and scientific reasoning, achieving remarkable success on tests like a qualifying exam for the International Mathematics Olympiad, where it solved 83% of the problems, compared to just 13% for its predecessor, GPT-4o. However, it is slower to respond as it requires extra time to "think" through its answers. Additionally, while it outperforms previous models in reasoning tasks, it doesn't yet handle multimodal inputs (e.g., images or audio) and is not designed for real-time web browsing.
Currently, GPT-o1 is available to ChatGPT Plus users, with broader access planned in the future. A smaller, faster variant called o1-mini is also available, catering to different use cases with lower costs but reduced computational capabilities.
Though GPT-o1 offers significant improvements, issues such as hallucinations (where the AI generates incorrect information) are not completely resolved, highlighting ongoing challenges in AI development.
Sources: Wired, TechCrunch, WinBuzzer.
2) Fei-Fei Li's New Startup, "World Labs"
Prompt: Who is Fei-Fei Li, and why is she called the "godmother of AI"?
Response: Professor Fei-Fei Li at Stanford is often referred to as the "godmother of AI" due to her pioneering work in the field, particularly her contributions to computer vision and the creation of ImageNet, a large-scale dataset that was foundational in advancing machine learning and artificial intelligence. ImageNet played a crucial role in the development of deep learning algorithms, especially for image recognition tasks. Her leadership and vision have had a significant impact on AI research, earning her widespread recognition in the field.
Prompt: Please provide a succinct overview of her new startup
Response: Fei-Fei Li's new startup, World Labs, is focused on advancing spatial intelligence—developing AI technology that can better understand and interact with the three-dimensional physical world. Unlike current AI models that primarily work with flat images and text, World Labs aims to create AI that can grasp the complexities of 3D environments. This could have applications in fields such as augmented reality (AR), virtual reality (VR), and robotics.
The startup recently raised $230 million in funding, with backing from major investors like Andreessen Horowitz, Intel Capital, and Nvidia's NVentures. World Labs is working on creating large "world models" using a mix of synthetic and real-world data, pushing the boundaries of AI by teaching it to comprehend spatial relationships in the world.
Sources: Yahoo Finance, Tech Startups.
3) Editor's note about SearchGPT vs ChatGPT
... mostly written by the editor, not by ChatGPT
The editor's personal experience suggests that SearchGPT, like Google and Bing, can search almost every site on the Internet, whereas ChatGPT can only search the sites of OpenAI's training partners, i.e., sites whose owners have signed contracts granting OpenAI the right to train its models on their content.
- Until recently, ChatGPT could not correctly respond to queries that involved events that occurred after the latest update of its underlying models.
- Recently, OpenAI added search capability to ChatGPT, thereby enabling the chatbot to base its responses on events that occurred after the latest update of its underlying models. However, the reader should note that OpenAI usually retrains its models to incorporate new information it has obtained from any source ... which led to an increasing number of copyright challenges ... which in turn led OpenAI to negotiate training partnerships with an increasing number of publishers.
- At this point OpenAI has established such a wide range of partnerships with publishers that it evidently feels confident ChatGPT can provide satisfactory responses to a very high percentage of its users' queries based only on the information on its partners' sites.
The editor of this blog first detected this limitation when he insisted that ChatGPT base its responses on some of his favorite tech publications. The chatbot lamely replied that it was having "difficulty" gaining access to those sites, sites whose URLs worked for the editor. When the editor asked if the chatbot's "difficulty" was due to its being a chatbot, the chatbot agreed.
The editor then asked ChatGPT to identify all of OpenAI's training partners. The chatbot provided the following partial, but very extensive list. Here's the chatbot's verbatim response:
"OpenAI has made several significant deals with media companies to access their content for training its AI models. Some of the notable partners include:
1. The Associated Press (AP): OpenAI has access to AP’s news archive dating back to 1985, allowing the use of this vast content for training models.
2. News Corp: This includes publications like The Wall Street Journal, MarketWatch, Barron’s, and The Times in the UK. The deal is reportedly worth over $250 million.
3. The Financial Times (FT): OpenAI gained access to FT’s library for text and article citations as part of a multi-year partnership.
4. Axel Springer: OpenAI has access to content from this German publisher, which includes Politico, Business Insider, and Bild.
5. Vox Media: The company, which owns The Verge, New York Magazine, and SB Nation, has licensed content for OpenAI to use in ChatGPT responses.
6. The Atlantic: This deal allows OpenAI to use content from the magazine for training, while also collaborating on AI-powered tools for journalism.
7. Dotdash Meredith: OpenAI partners with this publisher to use content from major brands like People, Better Homes & Gardens, and Investopedia.
8. Le Monde and Prisa Media: These partnerships provide access to major European news outlets, including El País."
Bottom line: Some, perhaps most, readers may find that they don't need SearchGPT for most of their queries. More significantly, many users may find that they don't need Google or Bing for a substantial percentage of their queries. Indeed, as OpenAI expands its partnerships, the distinction between search engines and chatbots will have less and less meaning. ChatGPT and its most nimble competitors will be able to answer almost all questions about almost everything.
Our readers may recall that one of Microsoft's primary objectives in funding OpenAI's development of generative AI was to capture a substantial share of Google's lucrative search market. If OpenAI captures a substantial share on its own, then it will be able to fund the massive data centers that it is proposing without huge investments from Big Tech corporations like Microsoft, Apple, and Nvidia, investments that are likely to be blocked by antitrust regulators in the US and the UK, and most vigorously by regulators in the EU.
Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.