Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Made it through another week? Take a breath. Here’s what moved the needle in AI.


OpenAI just rolled out a new model built for people doing real, messy work. Early testers say it handles long-running tasks, heavy tooling, and complex reasoning better than before, though results still depend on how you use it.


IBM and Pearson, meanwhile, are focusing on how AI can help people learn new skills faster without losing the human side of education.


And if your browser already feels chaotic, Google Labs is quietly experimenting with a new way to turn open tabs into something actually useful.


That’s enough future-thinking for now. Enjoy the weekend.


Here's another crazy day in AI:

  • GPT-5.2 launches with expert-level claims

  • IBM, Pearson launch global AI education partnership

  • Google Labs launches GenTabs for task-driven web navigation

  • Some AI tools to try out

TODAY'S FEATURED ITEM: The Newest Model from OpenAI

A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


What if the next leap in technology isn’t just smarter… but finally capable of handling the work you never have time for?


OpenAI has just released GPT-5.2, its newest and most advanced model, designed for deep professional work, long-running tasks, and tool-heavy projects. The update highlights major improvements across coding, analytics, reasoning, safety, speed, and real-world task execution. A range of early testers—from productivity platforms to developer tools—reported stronger results in their respective environments, though performance will naturally depend on the specific task and how the model is used.



Source: OpenAI


Here's what the release includes:

  • Performs at expert level on 70.9% of professional tasks across 44 occupations in the GDPval benchmark, completing work at notably faster speeds and lower costs than human professionals

  • Achieves 55.6% on SWE-Bench Pro, handling real-world software engineering problems across four programming languages, with 80% accuracy on the earlier SWE-bench Verified test

  • Produces 30% fewer errors compared to its predecessor, though OpenAI notes users should still verify outputs for anything critical

  • Maintains accuracy across documents up to 256,000 tokens, making it more practical for analyzing lengthy contracts, reports, or complex multi-file projects

  • Scores 98.7% on tool-calling benchmarks measuring the ability to coordinate multiple steps, such as customer service workflows that require accessing different systems

  • Reaches 93.2% on GPQA Diamond, a graduate-level science assessment, and solves 40.3% of expert-level mathematics problems

  • Offers three configurations—Instant for quick everyday tasks, Thinking for complex reasoning work, and Pro for situations where maximum accuracy justifies longer wait times





The numbers look impressive, and companies like Notion, Shopify, and Box reported seeing real improvements during testing. But there's an obvious question here: how much of this actually matters when you're sitting at your desk trying to get work done? Benchmarks measure specific things under ideal conditions—well-defined problems with clear success criteria. Most professional work doesn't look like that. It's messy, it changes halfway through, and it requires judgment calls based on context that's hard to explain, let alone feed into a prompt. OpenAI's own advice to double-check critical work suggests they know there's a gap between what performs well in tests and what you can rely on without supervision.


What's interesting is that GPT-5.2 seems to address some of the more frustrating limitations of earlier models—better at handling long documents, fewer instances of making things up, more reliable when it needs to use tools or complete multi-step tasks. Those are practical improvements that could genuinely save time if you're working with the kinds of tasks where AI already fits reasonably well. The pricing went up for API users, which tells you OpenAI thinks the quality boost is worth it, but also that not everyone will need or want to pay for that extra capability. Whether GPT-5.2 lives up to its claims will depend less on benchmark scores and more on whether it can handle the unpredictable, ambiguous situations that make up most people's actual workdays.




Check it out here.

Watch the news here.

OTHER INTERESTING AI HIGHLIGHTS:


IBM, Pearson Launch Global AI Education Partnership

/IBM Newsroom


IBM and Pearson are teaming up to create a new generation of AI-powered learning tools aimed at helping people adapt to a fast-changing workforce. Their collaboration focuses on personalized, skills-based learning solutions built on IBM’s watsonx platform, with the goal of improving how organizations upskill employees at scale. Pearson will also develop a custom AI learning platform with IBM to support better workflows, data-driven decisions, and new educational products. Beyond tools, both companies plan to explore ways to verify AI agents’ capabilities so organizations can deploy them more confidently.



Read more here.


Google Labs Launches GenTabs for Task-Driven Web Navigation

/Manini Roy, Senior Product Manager for AI Innovation, Chrome, and Amit Pitaru, Director, Creative Lab, on The Keyword, Google's blog


Google Labs is introducing Disco, a new experimental space designed to rethink how we browse and interact with the web. Its first feature, GenTabs, uses Gemini 3 to understand your open tabs and tasks, then builds interactive mini-apps to help you get things done without writing code. Early testers are already using it for everything from trip planning to creating learning tools for kids. Google is opening a waitlist as it gathers feedback on what works, what doesn’t, and what future browsing might look like.



Check it out here.

SOME AI TOOLS TO TRY OUT:


  • Purpose – AI mentor offering deep, personalized guidance anytime you need it.

  • Dex – Turn Chrome into an AI workspace that remembers your tasks and context.

  • AppWizzy – Build full-stack apps and websites by chatting with AI.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


How’s your week moving? Because AI developments this week feel like they’re sprinting ahead of the conversation.


A seasoned computer scientist — the one whose textbook trained today’s leaders — is raising alarms about AGI, urging the world to pay attention before the next breakthroughs get out ahead of oversight.


Meanwhile, the giants are trying to keep AI agents from splintering into proprietary silos. Collaboration might save us… or maybe just slow the chaos.


And somewhere between all that, OpenAI made ChatGPT Voice part of the main chat experience, so now you can just talk and it actually listens.


It’s a lot to take in… and the week’s not over yet.


Here's another crazy day in AI:

  • The AI textbook author on what keeps him up at night

  • Big AI companies back new agent standards effort

  • ChatGPT Voice now built directly into chat

  • Some AI tools to try out

TODAY'S FEATURED ITEM: The Questions We Haven't Answered About AGI


Image Credit: Wowza (created with Ideogram)


Are we building something we won't be able to control?


In a recent episode of The Diary Of A CEO podcast, host Steven Bartlett sits down with Professor Stuart Russell, a computer scientist who has spent over 40 years teaching and researching artificial intelligence at UC Berkeley. Russell wrote the textbook that many of today's AI company leaders studied from. Now, he's working 80 to 100 hours a week trying to get people to pay attention to what he sees as a critical problem with how we're developing AI systems.


The conversation covers the technical and societal challenges surrounding AGI development. Russell shares conversations he's had with AI company executives, examines economic incentives driving development, and explores what a world with highly capable AI systems might actually look like.





Some points they dug into:

  • Only a few companies and leaders currently influence how advanced systems are built and deployed.

  • Russell reflects on why intelligence has historically shaped control and how this applies to systems that may soon surpass human capability.

  • He describes how competitive pressure pushes development forward quickly, even when risks are acknowledged privately.

  • The assumption that advanced systems can simply be shut down is challenged by examples from existing research.

  • Current systems are designed to imitate human reasoning, which introduces technical and ethical complications when used broadly.

  • The discussion explores long-term possibilities around work, the economy, and how humans might define their roles in a more automated world.

  • Governments often lack the resources and structure to regulate at the same pace as industry progress.

  • Russell points to ongoing work focused on developing systems that respond more reliably to human intentions and boundaries.




There's a lot to unpack here. Russell isn't saying AI research should stop completely—his concern seems to center on timing and preparation. Do we understand enough about safety and control before these systems become significantly more capable? The economics are undeniably powerful. Companies are pouring in massive investments, competition is fierce, and the potential applications could be transformative. But the safety questions he raises are genuinely complex, and by his account, we don't have solid answers to many of them yet. What makes this conversation particularly interesting is that it's coming from someone who spent decades in the field and literally taught generations of AI researchers. He's not an outsider critiquing from a distance—he's someone deeply familiar with both the promise and the technical challenges.


The discussion also touches on broader questions that go beyond the purely technical. What happens to work, purpose, and social structure when AI capabilities expand dramatically? How do societies make decisions about technology that affects everyone when development is concentrated in a handful of companies? Russell is candid about not having all the answers, particularly around what a functional future with advanced AI actually looks like for most people day-to-day. He suggests we should probably work through some of these questions deliberately rather than just responding to whatever happens. Different people will have different takes on whether his concerns are proportionate or whether the pace of development needs adjustment. But the core issues he brings up—about verifiable safety, societal readiness, and who gets a voice in how this technology develops—seem like reasonable things to think about as AI systems become more integrated into everyday life.




Watch on YouTube here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


Big AI Companies Back New Agent Standards Effort

/Rebecca Bellan, Senior Reporter, on TechCrunch


The Linux Foundation has launched the Agentic AI Foundation (AAIF), an initiative aimed at preventing AI agents from becoming fragmented across proprietary ecosystems. Major players including OpenAI, Anthropic, and Block are contributing foundational frameworks and protocols to promote interoperability. The effort reflects a broader industry push toward open standards that make AI agents safer, more consistent, and easier for developers to integrate. While the long-term impact remains to be seen, the move signals momentum toward a more unified agent ecosystem.



Read more here.


ChatGPT Voice Now Built Directly Into Chat

/OpenAI


OpenAI has integrated ChatGPT Voice directly into the main chat experience, removing the need to switch modes. Users can now speak naturally, see real-time transcriptions, and view visuals like maps and images as part of the conversation. The update is rolling out across mobile and web, aiming to make voice interactions more seamless and intuitive. Those who prefer the previous setup can still enable the separate mode in settings.



Check it out here.

SOME AI TOOLS TO TRY OUT:


  • Documentation – Build and update product documentation effortlessly with AI.

  • Speechify – Text-to-speech, voice typing, and AI-powered browsing assistant.

  • Strater – Turn videos, PDFs, and articles into smart study materials with AI.






Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Hey there, made it to the weekend? Here’s a thought to carry with you: AI is changing work, but not always the way headlines suggest.


Jensen Huang joined Joe Rogan to talk about jobs. Some jobs may disappear, he says, but entirely new—and sometimes surprising—roles could emerge in their place.


Meanwhile, Meta is expanding Meta AI’s access to real-time content, pulling in breaking news, entertainment, and lifestyle updates from big-name partners to keep you better informed.


Across the AI scene, the pace is dizzying. Sam Altman steps into the spotlight, Nvidia faces chip challenges, and Replit partners with Google Cloud to supercharge coding... everyone’s scrambling to keep up.


Here's another crazy day in AI:

  • Jensen Huang's surprising take on job survival

  • Meta AI now pulls from more live news sources

  • AI giants feel the pressure as competition surges

  • Some AI tools to try out

TODAY'S FEATURED ITEM: Jensen Huang on Tomorrow's Job Market


Image Credit: Wowza (created with Ideogram)


What kinds of jobs might emerge once technology becomes advanced enough to create needs we haven't even imagined yet?


Jensen Huang, CEO of Nvidia, sat down with Joe Rogan recently to discuss artificial intelligence and its impact on employment. The conversation, covered by Business Insider reporter Polly Thompson, offers a different angle on a question many people are asking: what happens to jobs when AI gets better at doing things humans currently do? Huang's take includes both the sobering reality that some jobs will disappear and the possibility that entirely new ones—some quite unexpected—will emerge in their place.





What the story tells us:

  • Jobs built on purpose rather than repetitive tasks are more resilient than we think.

  • Automation will hit task-only roles first, but Huang believes human-centered professions will evolve, not disappear.

  • Entirely new job markets could form around robotics—manufacturing, maintenance, customization, and even “robot apparel.”

  • The shift toward automation mirrors past tech revolutions: disruptive, uncomfortable, but ultimately generative.

  • Huang admits no one truly knows the “end goal” of today’s fast-moving technology—but expects progress to unfold gradually, not in sudden leaps.

  • Growing safety practices in AI development—tool use, reflection, research before generation—are reducing common problems like hallucinations.

  • Concerns about AI’s long-term risks are, in Huang’s view, actively steering the field toward more responsible and reliable systems.





The idea of designing clothes for robots sounds almost absurd at first, but it actually points to something we've seen before with other technologies. Think about how many jobs exist today around smartphones or social media that would have seemed ridiculous to explain to someone in 1990. Huang's radiologist example is interesting too—the profession adapted rather than disappeared when technology took over part of what they do. That said, not every job has that kind of flexibility built in, and plenty of people work in roles that really are just about completing specific tasks efficiently.


What stands out most is the uncertainty. Even someone running a major AI company admits he doesn't know how this plays out. History suggests new jobs appear when technology changes things, but history also shows that transitions can be messy and uneven. Some people will find new opportunities, others will struggle to adapt, and the timeline matters a lot for those caught in between. It's worth paying attention to how this unfolds, because the answers will affect far more than just the tech industry.




Read the full article here.

Watch on YouTube here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


Meta AI Now Pulls From More Live News Sources

/Meta Newsroom


Meta is expanding Meta AI’s access to real-time content, bringing in a wider mix of breaking news, entertainment, lifestyle updates, and other timely stories across its apps and devices. The update includes new partnerships with major publishers such as CNN, Fox News, Le Monde Group, USA TODAY, and more, allowing Meta AI to surface information from a broader set of credible sources. These integrations also link users directly to partner articles, offering greater context while helping publishers reach new audiences. Meta says this is just the start, with plans to continue adding content sources to improve accuracy, balance, and responsiveness in fast-moving news environments.



Read more here.


AI Giants Feel The Pressure As Competition Surges

/Deirdre Bosa, TechCheck Anchor, on CNBC Television


A fast-moving week in AI has key players racing to keep up. OpenAI’s Sam Altman is ramping up public visibility amid reports of internal “Code Red,” while Nvidia’s Jensen Huang faces mounting geopolitical pressure as U.S.–China tensions reshape chip supply dynamics. At the same time, Replit has struck a multiyear partnership with Google Cloud to expand AI-driven “vibecoding,” bringing advanced tools to enterprise customers. The pace reflects an industry where none of the major players can afford to slow down, with competition, infrastructure challenges, and new alliances all intensifying.



Check it out here.

SOME AI TOOLS TO TRY OUT:


  • Contenov – Turn any topic into a clear, actionable content strategy with AI.

  • BitterBot – AI assistant for tasks, research, data analysis, and everyday problem-solving.

  • Fellow – Record meetings and get AI summaries with automatic follow-ups.






Copyright Wowza, Inc. 2025