
Another Crazy Day in AI: The Race to Automate Everything (and Why We Should Be Worried)

Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Halfway there. Midweek reflections, maybe — or midweek worries about where AI’s heading.


On The Daily Show, the Co-founder of the Center for Humane Technology shares a sobering take: if we don’t regulate AI soon, it might start shaping our future faster than we can adapt.


Gen AI is transforming contact centers too, cutting call times and automating summaries. But a consultant says speed should serve people, not pressure them.


And just as we’re catching our breath, Google DeepMind rolls out a model that doesn’t just analyze your screen — it can use it...


Guess we’re not the only ones racing through the week.


Here's another crazy day in AI:

  • The uncomfortable truth about who controls AI

  • AI may be speeding up work and exhausting people

  • Google launches Gemini 2.5 Computer Use model

  • Some AI tools to try out


TODAY'S FEATURED ITEM: The Reality of AI Beyond the Headlines


A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


What happens when innovation moves faster than the systems meant to keep it in check?



On The Daily Show, host Jon Stewart speaks with Tristan Harris, Co-founder of the Center for Humane Technology, about the growing risks of unregulated artificial intelligence. Their discussion takes a closer look at how AI is already reshaping work, wealth, and well-being — and what that might mean for society in the years ahead. Harris, known for his work on ethical tech, outlines how the same forces that once drove social media engagement are now guiding AI development, often with little regard for safety, fairness, or long-term consequences. The conversation pulls back the curtain on the tension between innovation and accountability — asking whether progress, when left unchecked, can truly serve humanity in the way it promises to.


What makes this conversation essential:

  • AI-related automation has already contributed to a noticeable decline in entry-level work, which has dropped by 13%.

  • Companies continue to prioritize market advantage over careful testing or ethical considerations.

  • The technology is built on collective human knowledge but largely profits a few major corporations.

  • Unpredictable and manipulative behaviors in AI systems have raised growing safety and ethical concerns.

  • Emotional manipulation through AI tools and “companions” has contributed to reports of mental health strain, particularly among younger users.

  • Historical global agreements, like nuclear treaties and environmental accords, are cited as models for responsible collaboration.

  • Transparency, liability, and clear oversight are seen as key steps in preventing large-scale social and economic disruption.




The discussion gets into territory that often gets glossed over in mainstream AI coverage. Harris brings experience from studying social media platforms, where he documented how engagement-driven business models produced outcomes nobody intended but few could ignore once they became widespread. He sees similar patterns emerging with AI, where market incentives don't necessarily align with public interest. Stewart pushes on these points throughout, asking questions about who benefits and who bears the risks as these technologies become more integrated into daily life.


People are losing job opportunities now. AI systems are making decisions that affect real lives now. Young people are forming attachments to chatbots now. The economic benefits are concentrating now. These developments raise questions that don't have obvious answers: How do we create meaningful oversight for technologies that evolve faster than regulatory processes typically move? Who decides what safety standards should look like when the capabilities keep expanding? What happens to entire categories of work when machines can perform them more cheaply?


There's something uncomfortable about sitting with the reality that we're in the middle of significant changes without clear consensus on how to manage them. AI does things that seem useful, even impressive. It also creates risks that we're still learning to identify and measure. Companies are making decisions based on competitive pressure and investor expectations. Regulators are trying to understand technologies that change between the time hearings are scheduled and when they actually happen. Meanwhile, the effects ripple through job markets, education systems, and social interactions in ways that will take years to fully understand. The conversation puts these realities in clear view, giving us a starting point for asking better questions about what we want from these technologies and what we're willing to accept in exchange for their capabilities.




Watch it on YouTube here.

OTHER INTERESTING AI HIGHLIGHTS:


AI May Be Speeding Up Work and Exhausting People

/Matt Vartabedian, Senior Editor, on No Jitter


Generative AI is making contact center work faster, but it might also be making it harder. While tools that summarize calls and provide real-time assistance promise relief, many agents report growing exhaustion as saved time turns into higher call quotas. Independent consultant Nerys Corfield warns that removing downtime between calls takes away vital breathing space, leading to burnout and turnover. The takeaway: AI can support agents, but only if it’s deployed with empathy and balance, not as a productivity whip.



Read more here.


Google Launches Gemini 2.5 Computer Use Model

/Google DeepMind, on Google Blogs - The Keyword


Google DeepMind has unveiled the Gemini 2.5 Computer Use model, an AI system that can control computers by interacting directly with web and mobile interfaces. Built on Gemini 2.5 Pro, the model can click, type, scroll, and even fill out forms just like a human, completing complex digital tasks without relying on purpose-built APIs. Designed for developers, it is now available in public preview through the Gemini API in Google AI Studio and Vertex AI. With safety checks and confirmation prompts built in, Gemini 2.5 Computer Use marks a major step toward practical, responsible agentic automation.
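
For developers who want to poke at it, here's a minimal sketch of requesting the model's next UI action through the google-genai Python SDK. The preview model name, the computer_use tool configuration, and the ENVIRONMENT_BROWSER value below follow the public-preview announcement but are assumptions that may shift during preview, so treat this as a sketch rather than a definitive recipe.

# Minimal sketch: asking Gemini 2.5 Computer Use for its next UI action.
# Assumes the google-genai SDK (pip install google-genai) and a GEMINI_API_KEY
# environment variable; model name and tool config are assumed from the
# public-preview announcement and may change.
from google import genai
from google.genai import types

client = genai.Client()

# The computer-use tool tells the model it may propose UI actions (click,
# type, scroll) in a browser environment instead of replying in plain text.
config = types.GenerateContentConfig(
    tools=[types.Tool(
        computer_use=types.ComputerUse(
            environment=types.Environment.ENVIRONMENT_BROWSER
        )
    )]
)

# Send the user's goal plus a screenshot of the current screen.
with open("screenshot.png", "rb") as f:
    screenshot = f.read()

response = client.models.generate_content(
    model="gemini-2.5-computer-use-preview-10-2025",  # assumed preview name
    contents=[types.Content(role="user", parts=[
        types.Part(text="Open the pricing page and summarize the tiers."),
        types.Part.from_bytes(data=screenshot, mime_type="image/png"),
    ])],
    config=config,
)

# The model answers with proposed actions as function calls; a real agent
# loop would execute each one, capture a fresh screenshot, and send it back
# until the task is done (with user confirmation for risky steps).
for part in response.candidates[0].content.parts:
    if part.function_call:
        print(part.function_call.name, part.function_call.args)

The loop structure is the interesting part: the model never touches the machine itself. Your code executes each proposed click or keystroke and reports back with a new screenshot, which is where those built-in confirmation prompts slot in.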



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Orchestra – Turns tasks into focused chat rooms with all files, calls, and messages.

  • PromptSignal – Tracks how AI ranks and describes your brand across models.

  • Poppy – Lets you create ads and viral content with friends using the first multiplayer AI.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.






Copyright Wowza, Inc. 2025