For users who have ChatGPT Plus or ChatGPT Pro subscriptions
"The Most Hyped Bot Since ChatGPT", Matteo Wong, The Atlantic, 12/9/24 ... ... This story also covered by Reuters, Marques Brownlee (video) ... and OpenAI
- Text The Atlantic
- Text OpenAI
Concise Summary of the First Article:
Matteo Wong’s Atlantic piece on Sora highlights the transition from OpenAI’s lofty AGI aspirations to hard-nosed product monetization. Sora, a video-generating AI model teased with mammoth-starring clips in February, is now finally available—but as a consumer tool rather than a scientific breakthrough. With its sleek editing features and a $200 watermark-free tier, Sora threatens human creativity, raises misinformation concerns, and promises to flood TikTok with AI-generated chaos. While OpenAI waxes poetic about AGI, Wong points out that the true goal is far more mundane: boosting profits and dominating yet another creative frontier.
Detailed Summary of the Second Article:
The OpenAI blog triumphantly announces Sora’s official launch, rolling out to ChatGPT Plus and Pro users. Sora, capable of generating 20-second videos in 1080p resolution, boasts features like “Remix,” “Blend,” and a storyboard tool for frame-by-frame input. While pitched as a step toward simulating reality and advancing AI’s understanding of the physical world, the model is riddled with caveats—it struggles with realistic physics and complex actions, and long durations are still a pipe dream.
Sora Turbo, the faster and shinier version, aims to smooth over some of these limitations. Safeguards include watermarks, metadata, and restricted uploads of people’s likenesses to mitigate the risk of abuse, like deepfakes or worse. Still, OpenAI admits the system isn’t perfect and is “letting society explore its possibilities,” which roughly translates to “good luck figuring out the ethical mess we just handed you.”
Subscriptions are tiered: Plus users get a modest allocation, while Pro subscribers can crank out higher-res videos with longer durations. A cynic might note the transparent play for more Pro upgrades, even as OpenAI wraps the launch in the language of “creativity” and “exploration.” The blog closes with an optimistic nod to storytelling potential, but beneath the PR varnish, it’s clear: Sora is less a step toward AGI and more a flashy bid for consumer dollars in the AI gold rush.
Concise Summary of the First Article (The Verge):
Apple’s latest iOS 18.2 update rolls out a suite of AI-infused features under the banner of “Apple Intelligence.” Highlights include Image Playground, a cartoony image generator; Genmoji, custom emoji-like stickers destined to plague group chats; and a much-hyped ChatGPT integration with Siri. Siri can now pass off complex questions to ChatGPT, sparing you from Siri’s usual obliviousness. Apple also throws in goodies like an “Image Wand” for Notes and daily sudoku in Apple News Plus—because who doesn’t need their AI and puzzles in one update? It’s all wrapped up with expanded language support and the obligatory promises of responsible AI.
Detailed Summary of the Second Article (CNBC):
Apple Teams Up with ChatGPT for Siri’s Glow-Up.
Apple’s latest iOS, iPadOS, and macOS updates mark a milestone for its AI ambitions, with ChatGPT integration into Siri as the crown jewel. The feature activates when Siri is too dim-witted to answer, redirecting users to GPT-4o for a brainier response (after user consent, of course). Apple promises privacy, claiming OpenAI won’t store user queries, and allows access to ChatGPT without requiring an OpenAI account. However, premium ChatGPT upgrades are available through Apple—because capitalism never sleeps.
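The handoff mechanics are worth a closer look. As a rough illustration of the consent-gated fallback pattern CNBC describes (Siri tries first, then asks permission before routing the query to GPT-4o), here is a minimal sketch; every name in it is a hypothetical stand-in, not an Apple API:

```python
# Hypothetical sketch of the consent-gated Siri -> ChatGPT handoff described
# above. None of these names are Apple APIs; they are stand-ins to illustrate
# the flow: try on-device first, then route to ChatGPT only with consent.
from dataclasses import dataclass

@dataclass
class LocalAnswer:
    text: str
    confident: bool

def siri_try_answer(query: str) -> LocalAnswer:
    # Stand-in for the on-device model; pretend it only handles timers.
    if "timer" in query.lower():
        return LocalAnswer("Timer set.", confident=True)
    return LocalAnswer("", confident=False)

def user_consents(prompt: str) -> bool:
    # Apple describes per-request consent before anything leaves the device.
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def chatgpt_answer(query: str) -> str:
    return f"(GPT-4o would answer: {query!r})"  # placeholder for the API call

def answer(query: str) -> str:
    local = siri_try_answer(query)
    if local.confident:
        return local.text
    if not user_consents("Use ChatGPT to answer this?"):
        return "Sorry, I can't help with that."
    return chatgpt_answer(query)

print(answer("Set a timer for 10 minutes"))
print(answer("Explain quantum error correction"))
```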
This launch isn’t just an AI experiment; it’s a calculated marketing push for the iPhone 16 lineup, betting that flashy AI tools like ChatGPT will lure upgrades and keep Apple as the reigning monarch of consumer tech. The integration also doubles as a PR win for OpenAI, granting its model access to millions of iPhone users.
Beyond Siri, the update includes Image Playground, an app for prompt-based and customizable image creation, and Image Wand, a photo-editing tool that screams “delete ex with one tap.” Apple’s strategy involves packaging these AI features as “Apple Intelligence,” a brand seemingly designed to make AI sound classy and exclusive.
For now, the system requires newer iPhones (15 Pro and up) and a hefty download of Apple’s AI models. The update positions Apple as a player in the consumer-AI arms race, though the integration relies heavily on cloud servers—perhaps proving that even cutting-edge AI needs some handholding. Investors are watching closely, salivating over the potential for AI to drive iPhone sales into yet another upgrade cycle.
PROMPT #2
Most companies have names for their language models; OpenAI, for example, calls its models GPT-4, GPT-4o, and so on. What are the names of the language models that Apple is using now in its Apple Intelligence?
RESPONSE:
Apple has developed a suite of foundation language models, referred to in its research as the Apple Foundation Models (AFM), to power the features collectively branded as Apple Intelligence. These include AFM-on-device, a ~3-billion-parameter model optimized for on-device performance, and AFM-server, a larger model designed to run on Private Cloud Compute. These models are designed to perform a wide range of tasks efficiently, accurately, and responsibly. They are integrated into various Apple products and services, enhancing functionalities such as Siri, text summarization, and tone adjustment. (Source: arXiv)
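For a sense of why the on-device model sits at ~3 billion parameters, here is a back-of-the-envelope footprint calculation. The parameter count comes from Apple's description; the precision levels below are common quantization choices assumed for illustration, not Apple's published deployment numbers:

```python
# Back-of-the-envelope memory footprint for a ~3B-parameter on-device model.
# The parameter count comes from Apple's description; the precisions below
# are illustrative assumptions, not Apple's published deployment details.

PARAMS = 3e9  # ~3 billion parameters

for label, bits_per_weight in [("fp16", 16), ("int8", 8), ("~4-bit quantized", 4)]:
    gib = PARAMS * bits_per_weight / 8 / 2**30
    print(f"{label:>18}: ~{gib:.1f} GiB of weights")

# fp16 ~5.6 GiB, int8 ~2.8 GiB, ~4-bit ~1.4 GiB -- a hint at why aggressive
# quantization is table stakes on a phone with 8 GB of RAM.
```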
Additionally, Apple has introduced OpenELM, a family of efficient language models with an open training and inference framework, further contributing to its AI capabilities. (Source: Hugging Face)
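OpenELM's checkpoints are public, so it is easy to poke at. Here is a minimal sketch of loading the smallest variant with Hugging Face transformers, assuming the apple/OpenELM-270M repo id and access to the gated Llama-2 tokenizer its model card pairs it with:

```python
# Minimal sketch: loading an OpenELM checkpoint from the Hugging Face Hub.
# Assumes the "apple/OpenELM-270M" repo id and that you have accepted the
# license for the Llama-2 tokenizer the model card pairs it with.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",
    trust_remote_code=True,  # OpenELM ships custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Apple Intelligence is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```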
PROMPT #3
Here is the text of an article. Provide a concise summary.
- Text Gizmodo
Summary: Apple’s new AI feature, part of its “Apple Intelligence” suite, mis-summarized a BBC push notification headline, falsely claiming Luigi Mangione shot himself when he remains in police custody. This error occurred in the AI-powered notification summary feature introduced in iOS 18.1, which aims to condense notifications but has been criticized for inaccuracy. The BBC has lodged a formal complaint, raising concerns about misinformation and its impact on credibility.
The incident highlights the challenges of generative AI, particularly in ensuring reliable outputs. While Apple has integrated AI for features like photo editing and smarter notifications, its reliance on AI technologies like ChatGPT has drawn criticism for unpolished and error-prone results, undermining Apple’s reputation for quality. Critics argue Apple is rushing AI integration to boost iPhone sales, requiring the iPhone 15 Pro or newer for these features.
- Text Bloomberg
- Text TechCrunch
Concise Summary: ... Bloomberg
After seven months of suspense, OpenAI’s ChatGPT can finally watch video feeds in real time and have a chat about it—because nothing screams progress like a chatbot that can now tell you, “Yes, that’s a coffee cup.” Unveiled during a livestream, this new feature lets ChatGPT recognize objects via your smartphone camera, respond to on-screen activities, and even help you reply to text messages (because typing is so last century).
Rolling out first to paid Plus and Pro users, the feature will reach enterprise and educational subscribers in January. Meanwhile, the EU and other regions remain on the sidelines, presumably waiting for ChatGPT to brush up on its GDPR compliance.
This move is part of OpenAI’s ongoing push into “multimodal” capabilities—AI that can handle text, audio, images, and now video. The announcement is just one piece of OpenAI’s 12-day product launch extravaganza, which also includes a pricier ChatGPT Pro tier and a shiny new video-generation tool named Sora. Clearly, OpenAI is taking “all eyes on me” a bit too literally.
Detailed Summary: ... TechCrunch
In a move that feels like OpenAI’s attempt to keep pace with the hype train it built seven months ago, ChatGPT can now do what it demoed back in April—analyze real-time video. Yes, after an extended “technical delay” (or what we cynics might call premature boasting), Advanced Voice Mode with vision is here, and it’s ready to help you point your phone at random objects and hope for coherent responses.
The feature, rolling out this week to ChatGPT Plus, Team, and Pro subscribers, allows users to identify objects, interpret on-screen content, and even critique your geometry homework with all the authority of someone who skimmed the textbook. Just tap a few buttons—voice icon, video icon, and (if you’re feeling ambitious) the screen-sharing menu—and voilà, ChatGPT can help you understand settings menus or solve that pesky math problem (if you don’t mind the occasional “hallucination”).
But before you get too excited, a few catches: Enterprise and Edu subscribers are left waiting until January, and users in the EU, Iceland, and similar locales are stuck in limbo with no release date. Maybe OpenAI’s servers are allergic to GDPR compliance? Meanwhile, Google is trailing close behind with Project Astra, testing its own video-analyzing chatbot on Android. It seems the AI video race is heating up, though none of these competitors have cracked the code for consistent accuracy yet.
Speaking of accuracy, ChatGPT’s Advanced Voice Mode with vision recently debuted on CBS’s 60 Minutes, where it showcased its knack for kind of understanding what it sees. Watching Anderson Cooper sketch anatomy on a chalkboard, it praised his brain drawing as “spot on” in location but critiqued its shape as “a good start.” Clearly, ChatGPT has artistic standards. However, its foray into geometry wasn’t as smooth—it stumbled on a basic problem, proving that even AI gets math wrong when it’s showing off.
Why the delay in rolling out these features? Reportedly, OpenAI announced the tool before it was ready (bold move) and has been patching it up ever since. The initial release of Advanced Voice Mode arrived earlier this fall but lacked the video capabilities, much to the disappointment of anyone hoping their AI could finally critique their cooking techniques or tell them if that plant in their living room is dying (spoiler: it probably is).
To sweeten the deal—and presumably distract from the glitches—OpenAI has added a holiday-themed “Santa Mode”, which lets ChatGPT adopt the voice of jolly old Saint Nick himself. Because if there’s one thing missing from your late-night existential queries, it’s Santa Claus chiming in to reassure you that everything’s fine. (You’ll find it under a snowflake icon, of course.)
So, is this a groundbreaking feature or just another overpromised tech gimmick? While the potential is exciting, OpenAI seems more focused on wowing users than perfecting the tech. In the meantime, college students everywhere can rejoice: ChatGPT will critique your doodles and misinterpret your math problems faster than ever before!
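If you want to approximate the "point your phone at random objects" experience yourself: the live video mode is not a public API endpoint as of this writing, but sending individual camera frames to GPT-4o through the standard chat completions API gets you the one-frame version. A minimal sketch, assuming the openai and opencv-python packages and an OPENAI_API_KEY in the environment:

```python
# Approximation of "look at my camera and describe it": grab one webcam frame
# and send it to GPT-4o as an image. This mimics the idea behind Advanced
# Voice Mode with vision; the real feature streams continuously and is not
# (as of this writing) a public API. Requires `pip install openai opencv-python`
# and an OPENAI_API_KEY in the environment.
import base64
import cv2
from openai import OpenAI

ok, frame = cv2.VideoCapture(0).read()  # one frame from the default webcam
assert ok, "could not read from camera"
_, jpeg = cv2.imencode(".jpg", frame)
image_b64 = base64.b64encode(jpeg.tobytes()).decode()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What do you see in this frame?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # e.g. "Yes, that's a coffee cup."
```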
-- This story also covered by Bloomberg, Wall Street Journal, Forbes, TechCrunch, AI Revolution (video) ... and Google
Concise Summary of NY Times Article:
Google’s quantum computer has achieved another “breakthrough” with its new chip, Willow, solving a calculation in under five minutes that would take traditional supercomputers 10 septillion years—vastly longer than the age of the universe. While still experimental, this quantum leap marks progress toward practical applications like drug discovery and AI development. However, with quantum computing still prone to errors and focused on abstract benchmarks, we’re left marveling at potential far-off applications while grappling with today’s mistakes. In the quantum race, the U.S. and China vie for supremacy, but useful real-world quantum applications remain decades away. Until then, it’s all about generating excitement (and funding).
Detailed Summary of Google Article:
Google’s new quantum chip, Willow, is here to remind us how outdated “traditional” supercomputers are. With 105 qubits of state-of-the-art design, Willow accomplished a random circuit sampling task in five minutes—a calculation that would take the fastest supercomputers 10 septillion years. For context, that’s a 1 followed by 25 zeroes, a span vastly longer than the age of the universe. But while Willow is a flex of quantum supremacy, its accomplishments remain esoteric, with little direct real-world utility.
The real game-changer? Error correction. Willow exponentially reduces errors as more qubits are added, a feat quantum researchers have chased for decades. Using grids of increasing complexity (3x3, 5x5, and 7x7 arrays of qubits), Google achieved a “below threshold” milestone—fewer errors as system size scales up. This is the holy grail for practical quantum computing, making Willow the first convincing prototype of a scalable quantum system.
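To make "below threshold" concrete: if each step up in grid size divides the logical error rate by a factor of roughly 2 (about what Google reported for Willow), the arithmetic looks like this. The starting error rate here is illustrative, not a measured figure:

```python
# Toy illustration of "below threshold" error suppression.
# Assumption: each step up in surface-code grid size (3x3 -> 5x5 -> 7x7)
# cuts the logical error rate by a factor LAMBDA ~= 2, roughly the figure
# Google reported for Willow. The starting rate is illustrative, not measured.

LAMBDA = 2.0        # suppression factor per code-distance step (assumed)
error_rate = 3e-3   # illustrative logical error rate for the 3x3 grid

for grid in ("3x3", "5x5", "7x7", "9x9 (extrapolated)"):
    print(f"{grid:>18}: logical error rate ~ {error_rate:.1e}")
    error_rate /= LAMBDA

# Below threshold, bigger codes mean *fewer* errors -- the opposite of what
# happens when the physical qubits are too noisy to begin with.
```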
Still, Willow’s achievements rely on specific benchmarks like random circuit sampling (RCS), a computational showpiece that proves quantum processors can do things classical machines can’t—though it has no immediate real-world application. Google aims to bridge this gap by targeting “useful, beyond-classical” computations that tackle problems like drug discovery, battery design, and even optimizing AI systems.
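For the curious, the flavor of random circuit sampling is easy to reproduce at toy scale with Cirq, Google's own quantum framework: build a random circuit, measure, and collect bitstring samples. At four qubits a laptop simulates this instantly; Willow's 105-qubit version of the same game is what classical supercomputers cannot keep up with:

```python
# Toy random circuit sampling with Cirq (Google's quantum framework).
# At 4 qubits this is trivially simulable; Willow's 105-qubit version of the
# same task is the one that leaves classical supercomputers behind.
import cirq

qubits = cirq.LineQubit.range(4)
circuit = cirq.testing.random_circuit(
    qubits, n_moments=10, op_density=0.8, random_state=42
)
circuit.append(cirq.measure(*qubits, key="m"))

samples = cirq.Simulator().run(circuit, repetitions=100)
print(samples.histogram(key="m"))  # bitstring counts -- the "sampling" in RCS
```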
Fabricated in Google’s custom Santa Barbara facility, Willow represents a quality-over-quantity approach. Its qubits achieve T1 coherence times nearing 100 microseconds—roughly a fivefold improvement over its predecessor, and key for longer, more complex computations.
While Willow dazzles with theoretical feats, the ultimate goal remains elusive: commercially relevant applications. Google acknowledges that quantum computing’s “killer apps”—e.g., modeling quantum effects for medicine or energy solutions—are still years away. For now, Willow serves as a glimpse into a future where quantum computing partners with AI to tackle humanity’s thorniest problems, from fusion energy to smarter algorithms.
Google wraps up with optimism, inviting researchers to join their quantum journey and learn about error correction via a new Coursera course. While the real-world utility of quantum computing remains nascent, Willow proves we’re inching closer to making the unimaginable not just possible but practical. Until then, we’ll keep marveling at numbers that make the age of the universe feel trivial.
Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.