Another Crazy Day in AI: What Happens If We Can't Control What We Build
- Wowza Team

- 3 days ago
- 4 min read

Hello, AI Enthusiasts.
How’s your week moving? Because AI developments this week feel like they’re sprinting ahead of the conversation.
A seasoned computer scientist — the one whose textbook trained today’s leaders — is raising alarms about AGI, urging the world to pay attention before the next breakthroughs get out ahead of oversight.
Meanwhile, giants are trying to keep AI agents from going rogue in their own silos. Collaboration might save us… or maybe just slow the chaos.
And somewhere between all that, OpenAI made ChatGPT Voice part of the main chat experience, so now you can just talk and it actually listens.
It’s a lot to take in… and the week’s not over yet.
Here's another crazy day in AI:
The AI textbook author on what keeps him up at night
Big AI companies back new agent standards effort
ChatGPT Voice now built directly into chat
Some AI tools to try out
TODAY'S FEATURED ITEM: The Questions We Haven't Answered About AGI

Image Credit: Wowza (created with Ideogram)
Are we building something we won't be able to control?
In a recent episode of The Diary Of A CEO podcast, host Steven Bartlett sits down with Professor Stuart Russell, a computer scientist who has spent over 40 years teaching and researching artificial intelligence at UC Berkeley. Russell co-wrote Artificial Intelligence: A Modern Approach, the textbook many of today's AI company leaders studied from. Now, he's working 80 to 100 hours a week trying to get people to pay attention to what he sees as a critical problem with how we're developing AI systems.
The conversation covers the technical and societal challenges surrounding AGI development. Russell shares conversations he's had with AI company executives, examines economic incentives driving development, and explores what a world with highly capable AI systems might actually look like.
Some points they dug into:
Only a few companies and leaders currently influence how advanced systems are built and deployed.
Russell reflects on why intelligence has historically shaped control and how this applies to systems that may soon surpass human capability.
He describes how competitive pressure pushes development forward quickly, even when risks are acknowledged privately.
The assumption that advanced systems can simply be shut down is challenged by examples from existing research.
Current systems are designed to imitate human reasoning, which introduces technical and ethical complications when used broadly.
The discussion explores long-term possibilities around work, the economy, and how humans might define their roles in a more automated world.
Governments often lack the resources and structure to regulate at the same pace as industry progress.
Russell points to ongoing work focused on developing systems that respond more reliably to human intentions and boundaries.
There's a lot to unpack here. Russell isn't saying AI research should stop completely—his concern seems to center on timing and preparation. Do we understand enough about safety and control before these systems become significantly more capable? The economics are undeniably powerful. Companies are pouring in massive investments, competition is fierce, and the potential applications could be transformative. But the safety questions he raises are genuinely complex, and by his account, we don't have solid answers to many of them yet. What makes this conversation particularly interesting is that it's coming from someone who spent decades in the field and literally taught generations of AI researchers. He's not an outsider critiquing from a distance—he's someone deeply familiar with both the promise and the technical challenges.
The discussion also touches on broader questions that go beyond the purely technical. What happens to work, purpose, and social structure when AI capabilities expand dramatically? How do societies make decisions about technology that affects everyone when development is concentrated in a handful of companies? Russell is candid about not having all the answers, particularly around what a functional future with advanced AI actually looks like for most people day-to-day. He suggests we should probably work through some of these questions deliberately rather than just responding to whatever happens. Different people will have different takes on whether his concerns are proportionate or whether the pace of development needs adjustment. But the core issues he brings up—about verifiable safety, societal readiness, and who gets a voice in how this technology develops—seem like reasonable things to think about as AI systems become more integrated into everyday life.
Watch on YouTube here.
Listen on Apple Podcasts here.
Listen on Spotify here.
OTHER INTERESTING AI HIGHLIGHTS:
Big AI Companies Back New Agent Standards Effort
/Rebecca Bellan, Senior Reporter, on TechCrunch
The Linux Foundation has launched the Agentic AI Foundation (AAIF), an initiative aimed at preventing AI agents from becoming fragmented across proprietary ecosystems. Major players including OpenAI, Anthropic, and Block are contributing foundational frameworks and protocols to promote interoperability. The effort reflects a broader industry push toward open standards that make AI agents safer, more consistent, and easier for developers to integrate. While the long-term impact remains to be seen, the move signals momentum toward a more unified agent ecosystem.
Read more here.
ChatGPT Voice Now Built Directly Into Chat
/OpenAI
OpenAI has integrated ChatGPT Voice directly into the main chat experience, removing the need to switch modes. Users can now speak naturally, see real-time transcriptions, and view visuals like maps and images as part of the conversation. The update is rolling out across mobile and web, aiming to make voice interactions more seamless and intuitive. Those who prefer the previous setup can still enable the separate mode in settings.
Check it out here.
SOME AI TOOLS TO TRY OUT:
Documentation – Build and update product documentation effortlessly with AI.
Speechify – Text-to-speech, voice typing, and AI-powered browsing assistant.
Strater – Turn videos, PDFs, and articles into smart study materials with AI.
That’s a wrap on today’s Almost Daily craziness.
Catch us almost every day—almost! 😉
EXCITING NEWS:
The Another Crazy Day in AI newsletter is on LinkedIn!!!

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we use AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.
