Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


The week’s cruising along, and so are the AI headlines.


In a recent podcast, a leading financial historian explores what centuries of market manias can teach us about today’s AI-fueled excitement. His take: even smart investors can get swept up when innovation turns into speculation. Maybe we’re not irrational… just historically consistent.


Netflix is taking that same energy to Hollywood, saying AI’s here to enhance, not erase, creativity.


And Samsung’s Galaxy XR takes things further, fusing AI with immersive tech to reimagine how we experience the digital world.


Here's another crazy day in AI:

  • What financial historians see in the AI investment wave

  • Netflix doubles down on Gen AI amid industry debate

  • Samsung debuts its first AI-native XR device

  • Some AI tools to try out


TODAY'S FEATURED ITEM: History’s Warning for the Tech Boom

[Image: a robotic "AI Scientist" standing beside a human "Human Scientist", both in white lab coats]

Image Credit: Wowza (created with Ideogram)


What if the excitement fueling today’s AI investments looks a lot like past financial bubbles?


In The Big View podcast, Peter Thal Larsen, Global Editor at Reuters Breakingviews, sits down with Edward Chancellor, a renowned financial historian and author of Devil Take the Hindmost: A History of Financial Speculation. Together, they explore what history’s biggest financial bubbles—from the dotcom era to the 2008 crisis—can teach us about today’s AI-fueled market surge. Chancellor shares insights from decades of studying speculative bubbles, explaining how investors might spot warning signs, why rational people make seemingly irrational investment decisions, and what typically brings these booms to an end. The discussion also tackles an interesting question: can bubbles actually benefit society, even when they leave investors nursing heavy losses?




What the discussion points to:

  • How valuation tools such as the Shiller PE ratio can signal when markets are overheating, even if they can’t predict when corrections will happen (a short computation sketch follows this list).

  • Why rapid price gains and massive capital inflows often mark the early signs of a bubble taking shape.

  • The scale of AI investments, with trillions of dollars flowing into chips and data centers, much of it financed by debt or long-term commitments.

  • The competitive pressure pushing companies and investors to stay in the race—sometimes more out of necessity than confidence.

  • How bubbles can leave behind lasting infrastructure, from railways to fiber optics, even when they end in financial loss.

  • The role of rising interest rates and economic slowdowns in ending speculative surges across history.
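
For readers who want the mechanics behind the first point above, here is a minimal sketch of how the Shiller PE (CAPE) ratio is computed: the current price divided by the average of the last ten years of inflation-adjusted earnings. The price and earnings figures below are invented for illustration, not real market data.

```python
# Minimal sketch of the Shiller PE (CAPE) ratio: current price divided
# by the 10-year average of inflation-adjusted earnings.
# All numbers here are hypothetical, for illustration only.

def shiller_pe(price: float, real_earnings_10y: list[float]) -> float:
    """CAPE = price / mean of ten years of inflation-adjusted annual earnings."""
    if len(real_earnings_10y) != 10:
        raise ValueError("expects a full decade of annual earnings")
    return price / (sum(real_earnings_10y) / len(real_earnings_10y))

# Hypothetical index at 5,000 with a decade of real earnings per share.
earnings = [150, 160, 155, 170, 180, 175, 190, 200, 195, 210]
print(f"CAPE = {shiller_pe(5000, earnings):.1f}")  # prints CAPE = 28.0
```

Readings far above the ratio's long-run average have historically coincided with overheated markets, which is exactly Chancellor's caveat: the metric can flag stretched valuations, but it says nothing about when a correction will arrive.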



Chancellor's analysis reveals how bubbles emerge from rational behavior under competitive pressure. Tech executives watching rivals invest billions in AI face real strategic concerns. Staying cautious might protect resources now, but falling behind technologically could prove fatal later. Investment managers deal with similar tensions. Clients expect performance, and missing a major rally invites tough questions regardless of long-term wisdom. These individual calculations, reasonable on their own, can combine to create unsustainable market conditions.


The discussion avoids simplistic conclusions. Chancellor acknowledges he cannot predict when or whether the AI boom will end badly. Bubbles often last longer than critics expect, making timing extremely difficult even for seasoned investors. The 1990s telecom bubble destroyed considerable wealth but also funded fiber optic networks that became essential infrastructure. Whether AI follows a similar path, expensive for participants but eventually productive for society, remains uncertain. The podcast examines different possibilities without claiming to know which will materialize.


Larsen and Chancellor offer practical ways to think about speculative markets rather than definitive forecasts. Recognizing a bubble and timing its peak are completely different challenges. Understanding how rational choices by many actors can produce collectively irrational outcomes helps explain recurring patterns in financial markets. History provides context, not predictions. Previous technological revolutions created similar investment frenzies with mixed results. Some delivered lasting value, others left wreckage. Examining those precedents gives investors and observers better tools for evaluating current developments, even when the future stays genuinely unclear.




Watch it on YouTube here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


Netflix Doubles Down on Gen AI Amid Industry Debate

/Amanda Silberling, Senior Writer, on TechCrunch


As Hollywood continues to wrestle with the role of AI in storytelling, Netflix is leaning in. The company says it’s “all in” on using generative AI to help creators work smarter — not to replace them. From special effects in The Eternaut to digital de-aging in Happy Gilmore 2, Netflix sees AI as a tool to enhance creativity and production, not a threat to it. Still, the move reignites debates across the entertainment industry about where human artistry ends and machine assistance begins.



Read more here.


Samsung Debuts Its First AI-Native XR Device

/Samsung Newsroom


Samsung has officially launched Galaxy XR, the first device built on the new Android XR platform created with Google and Qualcomm. Designed to merge AI and immersive technology, Galaxy XR marks the start of Samsung’s next computing frontier — combining voice, vision, and gesture for natural, multimodal interaction. With built-in Gemini integration, 3D video capabilities, and ergonomic design, it aims to reshape how people work, learn, and explore digital worlds.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Scrybe Voice – Turns spoken thoughts into viral social posts with natural AI voice and flair.

  • Krea Realtime – Instantly generate, edit, and restyle videos or images in real time.

  • Sparks – Build custom AI agents, collaborate in real time, and publish to the agent store.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Long day? Let’s catch you up real quick.


OpenAI’s ChatGPT Atlas might be the most ambitious take on browsing yet — it talks, learns, and keeps track of your digital path.


That same feature, though, has experts worried about how much memory AI should really have. The debate grows louder: should machines know us this well?


Google’s taking the opposite route — helping people build their own AI skills instead of just relying on the tech.


Hard to believe it’s just Tuesday, right?



Here's another crazy day in AI:

  • OpenAI launches Atlas browser with built-in ChatGPT

  • The hidden costs of AI memory

  • Google’s new hub for AI learning

  • Some AI tools to try out


TODAY'S FEATURED ITEM: ChatGPT Brings Agent Mode to Web Browsing

[Image: a robotic "AI Scientist" standing beside a human "Human Scientist", both in white lab coats]

Image Credit: Wowza (created with Ideogram)


Another AI browser entering the scene, or the one that finally gets it right?


OpenAI has launched ChatGPT Atlas, a web browser that integrates ChatGPT as its core component rather than as an add-on feature. Announced by Sam Altman and the Atlas team, the launch reframes web browsing around conversational AI while keeping traditional URL-based navigation available. The browser maintains familiar features like tabs, bookmarks, and autofill while introducing three headline capabilities: contextual chat that follows users across websites, browser memory that personalizes the experience over time, and agent mode that performs tasks autonomously on the user's behalf.




Here's what you need to know:

  • Sidebar assistance on any page – Every website gets an "Ask ChatGPT" button that opens a sidebar capable of reading the current page, whether you need help summarizing an article, understanding code, or comparing products without copying text between tabs

  • Memory that builds context – The browser can remember information from your browsing sessions to offer more personalized help and suggestions, though this feature is completely optional and you can review, manage, or delete stored memories anytime

  • Agent mode for hands-off tasks – ChatGPT can take control of the browser to complete multi-step processes like organizing documents across Google Docs and Linear, conducting research across multiple sites, or filling shopping carts; currently available in preview for Plus, Pro, and Business subscribers

  • Inline text refinement – Highlight text in any input field across the web and ask ChatGPT to edit, improve, or rewrite it without switching to another application

  • Conversational search results – Search queries return answers you can discuss back and forth with ChatGPT, while still offering traditional web links, images, videos, and news in separate tabs

  • Control over access and privacy – Choose which sites ChatGPT can see, turn memory features on or off at will, or use incognito mode to browse without any ChatGPT involvement or data collection

  • Security boundaries – The agent cannot run code, download files, or add extensions, and it asks for permission before touching financial websites; users can also run it in logged-out mode to restrict access to personal accounts



What OpenAI is proposing with Atlas is fundamentally different from adding AI features to an existing browser. The company designed Atlas around the idea that you'll be talking to ChatGPT regularly as you browse, making it a constant companion rather than an occasional tool. This approach will appeal to some users while feeling excessive to others, depending largely on how people actually use the web day-to-day. The agent mode stands out as the most ambitious feature—it goes beyond answering questions to actually performing tasks by navigating sites and interacting with pages on your behalf. That level of automation could be incredibly useful for repetitive work, but it also means trusting software to make decisions and take actions with access to your logged-in accounts and personal information.


OpenAI has been straightforward about the limitations and potential problems. They've documented that agent mode can make errors and remains vulnerable to malicious instructions that might be embedded in web content. The company built in various protections and gives users control over what the browser can see and do, but they acknowledge these safeguards won't stop everything. The real test will be how Atlas performs in everyday situations: whether the memory system feels genuinely helpful or intrusive, whether the agent can handle common tasks reliably enough to trust, and whether having ChatGPT always available actually improves the browsing experience or just adds complexity. How users respond to this browser over the next few months will reveal a lot about whether deeply integrated AI is something people want in their daily web use or whether simpler, more targeted AI features are enough.




Read the full article here.

Watch the livestream replay here.

OTHER INTERESTING AI HIGHLIGHTS:


The Hidden Costs of AI Memory

/Gathoni Ireri, Junior Research Scholar, Contributor, on Tech Policy Press


AI systems are learning to “remember” us — storing and recalling details across interactions to deliver more personalized experiences. But as platforms like ChatGPT, Gemini, and Anthropic’s Claude expand these long-term memory features, questions around transparency, consent, and manipulation are becoming urgent. Research shows that personalization can make AI far more persuasive, raising ethical concerns about influence and autonomy. Without clear safeguards, the very memory that makes AI more useful could also make it more dangerous.



Read more here.


Google’s New Hub for AI Learning

/Karen Dahut, CEO, Google Public Sector, on Google Blogs — The Keyword


Google just launched Google Skills, a unified platform offering nearly 3,000 courses, labs, and credentials designed to help people and organizations build real-world AI expertise. The new learning hub combines resources from Google Cloud, DeepMind, and Grow with Google into one accessible experience — complete with gamified lessons, team leaderboards, and certifications. Whether you’re a student, a developer, or an enterprise leader, Google Skills aims to make learning AI practical, interactive, and fun — with many options available at no cost.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • DocsAlot – Auto-generate and update docs, tutorials, and guides as code changes.

  • Mockuplabs – Instantly turn any image into a realistic, professional product mockup.

  • Simplora – Turns complex meetings into clear summaries and smart follow-ups in real time.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


The week’s just getting started, but if your brain’s already racing, here’s something worth slowing down for.


If you need a reality reset, OpenAI co-founder Andrej Karpathy just gave one. He breaks down why AI progress feels fast but moves slowly. Forget overnight AGI; this is the decade of agents, where models grow quietly smarter through endless iterations and unglamorous engineering.


Meanwhile, researchers are warning about a growing threat called AI poisoning... the new cybersecurity nightmare. With just a few corrupted files, entire models can be trained to “think wrong” — and no one would notice until it’s too late.


And in the workplace, a new report says leaders are going full steam ahead with AI while many employees are still left in the dark.


Here's another crazy day in AI:

  • Real intelligence vs internet mimicry

  • A closer look at AI poisoning and its risks

  • New report finds widening gap in workplace AI use

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Grounded Expectations for AI Progress

[Image: a robotic "AI Scientist" standing beside a human "Human Scientist", both in white lab coats]

Image Credit: Wowza (created with Ideogram)


If some of the smartest people in AI have been consistently wrong about timelines for the past 15 years, what makes us think we've finally figured it out this time?


A recent Dwarkesh Podcast conversation brings together Dwarkesh Patel and Andrej Karpathy, OpenAI co-founder and former head of Tesla’s self-driving program. Over two hours, they explore questions that rarely get straightforward answers: Why do AI demos look so impressive while real products take years to ship? What’s really happening when these models “think”? And why do so many researchers keep predicting breakthroughs that always seem just around the corner?


Karpathy spent five years watching self-driving cars evolve from stunning prototypes to the messy reality of deployment. He’s seen enough hype cycles to stay cautious, and enough real progress to remain optimistic. His view of artificial intelligence reflects that balance—steady progress built over decades rather than sudden revolutions. From early neural networks to reinforcement learning to the rise of large language models, he sees each wave not as a leap but as part of a long continuum.


This is why, when asked about artificial general intelligence, Karpathy doesn’t point to next year or the next big release. He believes AGI is still a decade away, and that this will be the decade of agents—a long, technical grind where systems mature slowly into something more autonomous, reliable, and useful. The conversation moves from the mechanics of how models learn to broader questions about what happens when machines begin doing most of what humans do—and how we might adapt over the decades to come.




Key points from the discussion include:

  • Karpathy estimates it will take roughly ten years for AI agents to develop the reliability and autonomy needed to be truly functional.

  • He describes current AI as capable but inconsistent—strong in generating language, weaker in reasoning, memory, and sustained learning.

  • Reinforcement learning, he notes, remains an inefficient training process that rewards final outcomes rather than thoughtful reasoning (a toy sketch of this follows the list).

  • He distinguishes between biological intelligence and digital “ghosts” trained on human data—systems that imitate cognition but lack embodied understanding.

  • True intelligence may emerge through in-context learning, where models adapt dynamically rather than rely only on pre-training.

  • His open-source nanochat project showed how AI coding assistants handle repetitive work well but still struggle with creative, complex codebases.

  • Progress, he predicts, will continue through steady improvements in data, compute, and algorithms—not sudden leaps.

  • His new project, Eureka, explores AI in education, using personalized tutoring to support deeper and more adaptive learning.
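
To make the reinforcement learning point concrete, here is a toy sketch, our illustration rather than anything from the podcast, of outcome-based training: the reward arrives only at the end of an episode, so every intermediate action receives the same credit, whether or not that particular step helped.

```python
# Toy REINFORCE example showing outcome-only reward: a single scalar
# reward at the end of each episode scales the update for every step
# alike, so individual steps get no separate credit or blame.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)  # two-action softmax policy

def run_episode(n_steps: int = 5):
    probs = np.exp(logits) / np.exp(logits).sum()
    actions = rng.choice(2, size=n_steps, p=probs)
    # Outcome-only reward: success iff most steps picked action 1.
    reward = 1.0 if actions.sum() > n_steps / 2 else 0.0
    return actions, probs, reward

for _ in range(2000):
    actions, probs, reward = run_episode()
    for a in actions:
        # The same end-of-episode reward multiplies the gradient for
        # every action in the episode, good moves and bad moves alike.
        logits += 0.1 * reward * (np.eye(2)[a] - probs)

print("learned action probabilities:",
      (np.exp(logits) / np.exp(logits).sum()).round(3))
```

Because good and bad moves inside a winning episode are reinforced equally, the learner needs many episodes to average out the noise, which is one way to see why Karpathy calls the process inefficient.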




Karpathy doesn’t dismiss the incredible progress happening in AI, but he’s clear about the gap between research demos and real-world reliability. He frames today’s systems as capable yet incomplete—tools that can simulate understanding but still depend on human guidance. His reflections serve as a reminder that building something truly intelligent involves patience, iteration, and humility as much as innovation.


By describing the coming years as a decade-long effort, Karpathy tempers both optimism and skepticism. The advances we’re witnessing are real, but the work ahead remains demanding. For those following the field closely, this perspective offers something rare: a calm, informed view that acknowledges both the magnitude of what’s been achieved and the many questions still left unanswered. Rather than promising imminent transformation, Karpathy points to a slower, steadier kind of progress—the kind that, over time, could quietly redefine what we mean by intelligence itself.




Watch it on YouTube here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


A Closer Look at AI Poisoning and Its Risks

/Seyedali Mirjalili, AI Professor, Faculty of Business and Hospitality, Torrens University Australia, on The Conversation


AI poisoning — the deliberate manipulation of training data to make AI models learn the wrong lessons — is emerging as one of the most serious risks in the AI ecosystem. Even inserting a few hundred malicious files into massive datasets can secretly alter a model’s behavior. Recent studies show how poisoned models can spread misinformation or enable cyberattacks while appearing completely normal. The threat also extends to artists, some of whom are now using “defensive poisoning” to stop AI systems from scraping their work.
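
The article concerns poisoning of the huge corpora behind large models, but the mechanism is easy to see at toy scale. The sketch below, our illustration rather than code from the article, flips a small fraction of training labels for a simple scikit-learn classifier and measures the damage on clean test data.

```python
# Toy demonstration of data poisoning via label flipping: corrupting a
# small slice of the training labels quietly degrades the model while
# the training pipeline itself looks perfectly normal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on labels with `flip_fraction` of them deliberately flipped."""
    rng = np.random.default_rng(1)
    y_bad = y_tr.copy()
    n_flip = int(flip_fraction * len(y_bad))
    idx = rng.choice(len(y_bad), size=n_flip, replace=False)
    y_bad[idx] = 1 - y_bad[idx]  # flip labels 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
    return model.score(X_te, y_te)

for frac in (0.0, 0.05, 0.25):
    print(f"{frac:.0%} labels poisoned -> clean test accuracy "
          f"{accuracy_after_poisoning(frac):.3f}")
```

Real attacks on language models are subtler, hiding trigger phrases rather than flipping labels at random, but the lesson is the same: a small amount of corrupted training data can shift behavior in ways ordinary evaluation may not catch.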



Read more here.


New Report Finds Widening Gap in Workplace AI Use

/Jim Wilson, Writer for Canadian HR Reporter and Canadian Occupational Safety, on HRD


A new report by Perceptyx reveals a widening gap in AI adoption across workplace levels. While more than 80% of executives and managers regularly use generative AI, only 35% of individual contributors do the same. Many frontline workers feel left out of decision-making, with fewer than half understanding how AI tools are chosen or believing AI-supported decisions are fair. Experts warn that without inclusion, employees may resist or disengage from AI-driven transformations.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • FastHeadshot – Create studio-quality headshots from any photo in seconds with AI.

  • Mailmodo – Automate your entire email marketing workflow—from creation to reporting.

  • Docgility – Draft, review, and negotiate contracts faster with AI-powered collaboration tools.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025