Another Crazy Day in AI: Why Real AGI Will Take Longer Than You Think
- Wowza Team

- Oct 21
- 4 min read

Hello, AI Enthusiasts.
The week’s just getting started, but if your brain’s already racing, here’s something worth slowing down for.
If you need a reality reset, OpenAI co-founder Andrej Karpathy just gave one. He breaks down why AI progress feels fast but moves slowly. Forget overnight AGI; this is the decade of agents, where models grow quietly smarter through endless iterations and unglamorous engineering.
Meanwhile, researchers are warning about a growing threat called AI poisoning... the new cybersecurity nightmare. With just a few hundred corrupted files slipped into its training data, an entire model can be taught to “think wrong” — and no one would notice until it’s too late.
And in the workplace, a new report says leaders are going full steam ahead with AI while many employees are still left in the dark.
Here's another crazy day in AI:
Real intelligence vs internet mimicry
A closer look at AI poisoning and its risks
New report finds widening gap in workplace AI use
Some AI tools to try out
TODAY'S FEATURED ITEM: Grounded Expectations for AI Progress

Image Credit: Wowza (created with Ideogram)
If some of the smartest people in AI have been consistently wrong about timelines for the past 15 years, what makes us think we've finally figured it out this time?
A recent Dwarkesh Podcast conversation brings together Dwarkesh Patel and Andrej Karpathy, OpenAI co-founder and former director of AI at Tesla, where he led the Autopilot program. Over two hours, they explore questions that rarely get straightforward answers: Why do AI demos look so impressive while real products take years to ship? What’s really happening when these models “think”? And why do so many researchers keep predicting breakthroughs that always seem just around the corner?
Karpathy spent five years watching self-driving cars evolve from stunning prototypes to the messy reality of deployment. He’s seen enough hype cycles to stay cautious, and enough real progress to remain optimistic. His view of artificial intelligence reflects that balance—steady progress built over decades rather than sudden revolutions. From early neural networks to reinforcement learning to the rise of large language models, he sees each wave not as a leap but as part of a long continuum.
This is why, when asked about artificial general intelligence, Karpathy doesn’t point to next year or the next big release. He believes AGI is still a decade away, and that this will be the decade of agents—a long, technical grind where systems mature slowly into something more autonomous, reliable, and useful. The conversation moves from the mechanics of how models learn to broader questions about what happens when machines begin doing most of what humans do—and how we might adapt over the decades to come.
Key points from the discussion include:
Karpathy estimates it will take roughly ten years for AI agents to develop the reliability and autonomy needed to be truly functional.
He describes current AI as capable but inconsistent—strong in generating language, weaker in reasoning, memory, and sustained learning.
Reinforcement learning, he notes, remains an inefficient training process that rewards final outcomes rather than thoughtful reasoning.
He distinguishes between biological intelligence and digital “ghosts” trained on human data—systems that imitate cognition but lack embodied understanding.
True intelligence may emerge through in-context learning, where models adapt dynamically rather than relying solely on pre-training.
His open-source nanochat project showed how AI coding assistants handle repetitive work well but still struggle with creative, complex codebases.
Progress, he predicts, will continue through steady improvements in data, compute, and algorithms—not sudden leaps.
His new venture, Eureka Labs, explores AI in education, using personalized tutoring to support deeper and more adaptive learning.
Karpathy doesn’t dismiss the incredible progress happening in AI, but he’s clear about the gap between research demos and real-world reliability. He frames today’s systems as capable yet incomplete—tools that can simulate understanding but still depend on human guidance. His reflections serve as a reminder that building something truly intelligent involves patience, iteration, and humility as much as innovation.
By describing the coming years as a decade-long effort, Karpathy tempers both optimism and skepticism. The advances we’re witnessing are real, but the work ahead remains demanding. For those following the field closely, this perspective offers something rare: a calm, informed view that acknowledges both the magnitude of what’s been achieved and the many questions still left unanswered. Rather than promising imminent transformation, Karpathy points to a slower, steadier kind of progress—the kind that, over time, could quietly redefine what we mean by intelligence itself.
Watch it on YouTube here.
Listen on Apple Podcasts here.
Listen on Spotify here.
OTHER INTERESTING AI HIGHLIGHTS:
A Closer Look at AI Poisoning and Its Risks
/Seyedali Mirjalili, AI Professor, Faculty of Business and Hospitality, Torrens University Australia, on The Conversation
AI poisoning — the deliberate manipulation of training data to make AI models learn the wrong lessons — is emerging as one of the most serious risks in the AI ecosystem. Even inserting a few hundred malicious files into massive datasets can secretly alter a model’s behavior. Recent studies show how poisoned models can spread misinformation or enable cyberattacks while appearing completely normal. The threat also extends to artists, some of whom are now using “defensive poisoning” to stop AI systems from scraping their work.
Read more here.
New Report Finds Widening Gap in Workplace AI Use
/Jim Wilson, Writer for Canadian HR Reporter and Canadian Occupational Safety, on HRD
A new report by Perceptyx reveals a widening gap in AI adoption across workplace levels. While more than 80% of executives and managers regularly use generative AI, only 35% of individual contributors do the same. Many frontline workers feel left out of decision-making, with fewer than half understanding how AI tools are chosen or believing AI-supported decisions are fair. Experts warn that without inclusion, employees may resist or disengage from AI-driven transformations.
Read more here.
SOME AI TOOLS TO TRY OUT:
FastHeadshot – Create studio-quality headshots from any photo in seconds with AI.
Mailmodo – Automate your entire email marketing workflow—from creation to reporting.
Docgility – Draft, review, and negotiate contracts faster with AI-powered collaboration tools.
That’s a wrap on today’s Almost Daily craziness.
Catch us almost every day—almost! 😉
EXCITING NEWS:
The Another Crazy Day in AI newsletter is on LinkedIn!!!

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.




