Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Thursday’s winding down, but the world of tech is wide awake.


Meta leaders talk about how tech, regulation, and automation are reshaping data protection — and why privacy is everyone’s job now. It’s a reminder that trust, once a policy issue, is now a technical one too.


Meanwhile, OpenAI’s Developer Day spotlighted a new era of agentic, modular tools ready to plug straight into business workflows.


And Google’s AI Works initiative wants to make sure small businesses don’t get left behind... rolling out funding and new tools to make AI adoption a little less daunting.


The week’s almost done, but AI’s still clocked in.


Here's another crazy day in AI:

  • Rebuilding compliance for the age of automation

  • Building businesses on modular AI systems

  • Google backs small business innovation with $5M AI initiative

  • Some AI tools to try out


TODAY'S FEATURED ITEM: The New Privacy Playbook for Global Companies


A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


Can companies realistically keep up when privacy regulations are being written faster than products can launch?


In an episode of Privacy Conversations on Meta’s YouTube Channel, Erin Egan, Meta’s Vice President and Chief Privacy Officer for Public Policy, sits down with Susan Cooper, Meta’s Global Data Protection Officer, and Bojana Bellamy, President of the Centre for Information Policy Leadership (CIPL). Their discussion centers on the evolving relationship between technology, regulation, and trust—and how large organizations navigate compliance in a fast-changing digital landscape.


The conversation offers a look into how global data protection standards have expanded since the EU’s General Data Protection Regulation (GDPR), influencing privacy frameworks across countries and industries. It also explores how automation and AI are increasingly woven into risk management, helping teams identify and respond to regulatory changes more efficiently.


Topics explored in the conversation:

  • GDPR's influence beyond Europe, inspiring similar privacy regulations in Brazil, South Korea, India, and several U.S. states

  • Integrated accountability frameworks that bring together privacy, security, AI compliance, and safety considerations under one operational structure

  • Meta's experience restructuring its privacy program after GDPR and an FTC settlement, moving toward centralized product risk management

  • Using automation for routine compliance work like data flow mapping and privacy impact assessments, which frees up teams to handle more complex issues

  • Privacy-aware infrastructure that embeds regulatory requirements into code, reducing reliance on manual compliance checks at each stage

  • How AI helps analyze new regulations, scan code for compliance gaps, and spot patterns across large numbers of products

  • The ongoing importance of human expertise for interpreting unclear regulations and addressing risks that don't fit existing patterns

  • Questions about whether these technology-driven approaches can work for organizations with fewer resources




Bellamy makes an observation early in the conversation that sets the tone: companies today aren't just navigating data protection laws—they're juggling requirements across AI governance, cybersecurity, content moderation, and children's privacy, often with conflicting timelines and regional variations. When a new regulation might give companies only a few months to comply, the traditional methods of spreadsheets and manual reviews start showing their limitations. This isn't just a theoretical problem. It's something compliance teams deal with regularly.


Cooper describes how Meta approached this by embedding compliance directly into their technical infrastructure. Rather than reviewing each feature against a checklist, they built systems that automatically enforce policies like data deletion schedules. It's an engineering-heavy solution that works for them, but it also requires significant investment—both in the technology itself and in the people who can build and maintain it. Not every organization has that capacity, which raises legitimate questions about whether this becomes the standard approach or remains limited to companies with substantial technical resources. There's also the practical matter of keeping these systems updated as regulations change, which they inevitably do.
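The idea of enforcing a policy in code rather than by checklist can be pictured with a toy example. The sketch below is purely illustrative and not based on Meta's actual systems: it encodes a hypothetical data-retention schedule so that expired records are flagged automatically instead of waiting for a manual review.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention policy: each data category keeps records
# for a fixed number of days, enforced in code rather than by checklist.
RETENTION_DAYS = {
    "analytics": 90,
    "support_tickets": 365,
}

@dataclass
class Record:
    category: str
    created_at: datetime

def records_to_delete(records, now):
    """Return records whose retention window has expired."""
    expired = []
    for rec in records:
        limit = RETENTION_DAYS.get(rec.category)
        if limit is not None and now - rec.created_at > timedelta(days=limit):
            expired.append(rec)
    return expired

# Example: a 100-day-old analytics record exceeds its 90-day window,
# while the other two records are still inside their windows.
now = datetime(2025, 10, 9)
records = [
    Record("analytics", now - timedelta(days=100)),        # expired
    Record("analytics", now - timedelta(days=10)),         # retained
    Record("support_tickets", now - timedelta(days=100)),  # retained
]
expired = records_to_delete(records, now)
print(len(expired))  # prints 1
```

Real systems of this kind run such checks continuously against production data stores, which is what makes the approach engineering-heavy: the policy table, not a human reviewer, has to be kept in sync with each regulation as it changes.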


Both speakers are careful to point out where automation works well and where it doesn't. Repetitive tasks, routine assessments, parsing large amounts of regulatory text—these are areas where AI and automation can handle the heavy lifting. But when it comes to interpreting unclear legal language, or figuring out how to apply existing rules to a completely new type of product, that still requires people who understand the context. The conversation doesn't paint a picture of compliance becoming fully automated. Instead, it suggests that organizations are figuring out how to divide the work between what technology handles efficiently and what genuinely needs human judgment. As both regulations and technology continue to develop, that division of labor will likely keep changing.




Watch it on YouTube here.

OTHER INTERESTING AI HIGHLIGHTS:


Building Businesses on Modular AI Systems

/Eric Sheng, Partner, Silicon Valley, on Bain & Company


OpenAI’s 2025 Developer Day marks a turning point for enterprise AI, ushering in a modular, platform-driven era. From app ecosystems inside ChatGPT to agentic systems like AgentKit and new multimodal tools such as Sora 2, the focus is on making AI integral to how businesses operate. Bain & Company’s Eric Sheng notes that AI is evolving from a set of tools to a living business platform where apps, agents, and models collaborate across workflows. For organizations, success now depends on redesigning systems around modular AI services with strong governance and clear measurement.



Read more here.


Google Backs Small Business Innovation with $5M AI Initiative

/Lisa Gevelber, Founder, Grow with Google, on Google Blogs - The Keyword


Google announced new funding and training to help small businesses thrive in the AI era. Through its AI Works initiative, Google.org will provide $5 million to the U.S. Chamber of Commerce to launch Small Business B(AI)sics, a national program that will train 40,000 businesses in essential AI skills. Alongside the grant, Google introduced a new short course, Make AI Work for You, offering step-by-step guidance and real-world examples on using AI for marketing, operations, and productivity. These efforts aim to make AI more accessible, equipping local entrepreneurs to innovate and grow with confidence.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Tasklet – Automates workflows and connects AI to your apps, no flowcharts needed.

  • Easy-Peasy – All-in-one AI tool for creating videos, images, music, and text effortlessly.

  • Opal – Lets you chain AI steps visually and build mini-apps using natural language.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Halfway there. Midweek reflections, maybe — or midweek worries about where AI’s heading.


On The Daily Show, the Co-founder of the Center for Humane Technology shares a sobering take: if we don’t regulate AI soon, it might start shaping our future faster than we can adapt.


Gen AI is transforming contact centers too, cutting call times and automating summaries. But a consultant says speed should serve people, not pressure them.


And just as we’re catching our breath, Google DeepMind rolls out a model that doesn’t just analyze your screen — it can use it...


Guess we’re not the only ones racing through the week.


Here's another crazy day in AI:

  • The uncomfortable truth about who controls AI

  • AI may be speeding up work and exhausting people

  • Google launches Gemini 2.5 Computer Use model

  • Some AI tools to try out


TODAY'S FEATURED ITEM: The Reality of AI Beyond the Headlines



Image Credit: Wowza (created with Ideogram)


What happens when innovation moves faster than the systems meant to keep it in check?



On The Daily Show, host Jon Stewart speaks with Tristan Harris, Co-founder of the Center for Humane Technology, about the growing risks of unregulated artificial intelligence. Their discussion takes a closer look at how AI is already reshaping work, wealth, and well-being — and what that might mean for society in the years ahead. Harris, known for his work on ethical tech, outlines how the same forces that once drove social media engagement are now guiding AI development, often with little regard for safety, fairness, or long-term consequences. The conversation pulls back the curtain on the tension between innovation and accountability — asking whether progress, when left unchecked, can truly serve humanity in the way it promises to.


What makes this conversation essential:

  • AI-related automation has already led to a 13% decline in entry-level work.

  • Companies continue to prioritize market advantage over careful testing or ethical considerations.

  • The technology is built on collective human knowledge but largely profits a few major corporations.

  • Unpredictable and manipulative behaviors in AI systems have raised growing safety and ethical concerns.

  • Emotional manipulation through AI tools and “companions” has contributed to reports of mental health strain, particularly among younger users.

  • Historical global agreements, like nuclear and environmental regulations, are cited as models for responsible collaboration.

  • Transparency, liability, and clear oversight are seen as key steps in preventing large-scale social and economic disruption.




The discussion gets into territory that often gets glossed over in mainstream AI coverage. Harris brings experience from studying social media platforms, where he documented how engagement-driven business models produced outcomes nobody intended but few could ignore once they became widespread. He sees similar patterns emerging with AI, where market incentives don't necessarily align with public interest. Stewart pushes on these points throughout, asking questions about who benefits and who bears the risks as these technologies become more integrated into daily life.


People are losing job opportunities now. AI systems are making decisions that affect real lives now. Young people are forming attachments to chatbots now. The economic benefits are concentrating now. These developments raise questions that don't have obvious answers: How do we create meaningful oversight for technologies that evolve faster than regulatory processes typically move? Who decides what safety standards should look like when the capabilities keep expanding? What happens to entire categories of work when machines can perform them more cheaply?


There's something uncomfortable about sitting with the reality that we're in the middle of significant changes without clear consensus on how to manage them. AI does things that seem useful, even impressive. It also creates risks that we're still learning to identify and measure. Companies are making decisions based on competitive pressure and investor expectations. Regulators are trying to understand technologies that change between the time hearings are scheduled and when they actually happen. Meanwhile, the effects ripple through job markets, education systems, and social interactions in ways that will take years to fully understand. The conversation puts these realities in clear view, giving us a starting point for asking better questions about what we want from these technologies and what we're willing to accept in exchange for their capabilities.




Watch it on YouTube here.

OTHER INTERESTING AI HIGHLIGHTS:


AI May Be Speeding Up Work and Exhausting People

/Matt Vartabedian, Senior Editor, on No Jitter


Generative AI is making contact center work faster, but it might also be making it harder. While tools that summarize calls and provide real-time assistance promise relief, many agents report growing exhaustion as saved time turns into higher call quotas. Independent consultant Nerys Corfield warns that removing downtime between calls takes away vital breathing space, leading to burnout and turnover. The takeaway: AI can support agents, but only if it’s deployed with empathy and balance, not as a productivity whip.



Read more here.


Google Launches Gemini 2.5 Computer Use Model

/Google DeepMind, on Google Blogs - The Keyword


Google DeepMind has unveiled the Gemini 2.5 Computer Use model, an AI system that can control computers by interacting directly with web and mobile interfaces. Built on Gemini 2.5 Pro, the model can click, type, scroll, and even fill out forms just like a human, completing complex digital tasks without APIs. Designed for developers, it is now available in public preview through the Gemini API in Google AI Studio and Vertex AI. With safety checks and confirmation prompts built in, Gemini 2.5 Computer Use marks a major leap toward practical, responsible agentic automation.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Orchestra – Turns tasks into focused chat rooms with all files, calls, and messages.

  • PromptSignal – Tracks how AI ranks and describes your brand across models.

  • Poppy – Create ads and viral content with friends using the first multiplayer AI.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


After the first rush of the week, here’s a pause worth taking.


Microsoft’s Ideas podcast revealed that even AI can trip over its own cleverness. Researchers found a sneaky vulnerability in AI-driven protein design with real-world consequences.


Meanwhile, AirPods that double as translators make world travel smoother, maybe a little too smooth for language purists.


And OpenAI just turned ChatGPT into a full ecosystem of apps — Spotify, Canva, Coursera... all hanging out in your chat window now.


Here's another crazy day in AI:

  • The Paraphrase Project and biosecurity resilience

  • The promise and peril of real-time AI translation

  • Apps arrive in ChatGPT with tools for developers

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Protecting Progress in Biotechnology



Image Credit: Wowza (created with Ideogram)


What happens when a discovery meant to advance science also exposes a hidden risk?



In a recent episode of Microsoft Research’s Ideas podcast, Eric Horvitz, Microsoft’s Chief Scientific Officer, joined Bruce Wittmann, Tessa Alexanian, and James Diggans to discuss a finding that brought new attention to biosecurity in the age of AI. The episode, More AI-Resilient Biosecurity with the Paraphrase Project, explores how researchers uncovered an overlooked vulnerability in AI-assisted protein design, and how that discovery opened a wider discussion about safety, collaboration, and responsibility in scientific research.



Among the key points discussed in the episode:

  • The Paraphrase Project’s investigation into how AI systems could be manipulated to produce harmful biological designs through indirect prompts.

  • The process behind uncovering this risk and the team’s decision to responsibly disclose their findings.

  • How partnerships across academia, industry, and government played a role in evaluating and mitigating the potential threat.

  • The importance of balancing open scientific progress with protective guardrails for emerging AI tools.

  • Broader reflections on what “responsibility” means when innovation has implications beyond its intended purpose.



The Paraphrase Project’s findings highlight how even well-intentioned advances can surface new questions about how science and technology intersect. It’s not the first time progress has run alongside risk, but this case offers a real example of how awareness and accountability can shape a more thoughtful approach to innovation. By addressing the issue publicly, the researchers set an example for how transparency can help the scientific community strengthen, not hinder, trust.


It also points to the growing need for cooperation among different sectors. When discoveries like these cross disciplines, the solutions often require shared expertise and open communication. The collaboration behind the Paraphrase Project shows that biosecurity isn’t just a research concern; it’s a collective effort involving ethical, technical, and social awareness.


As AI continues to expand its role in science, these conversations remind us that progress isn’t just measured by what technology can do, but by how thoughtfully it’s applied. Responsible innovation doesn’t slow discovery; it ensures that each step forward considers the broader implications for safety, integrity, and public trust.



Read the article and transcript here.

Watch it on YouTube here.

Listen on Spotify here.

Listen on Apple Podcasts here.

OTHER INTERESTING AI HIGHLIGHTS:


The Promise and Peril of Real-Time AI Translation

/Daniel Seifert, Writer, on BBC


Apple’s new live translation feature is redefining what global communication could look like. Built into the AirPods Pro 3, it allows users to hear real-time translations directly in their ears while viewing text transcripts on their iPhones. This innovation could open doors to friction-free travel and instant multilingual interaction — but it also raises questions about what might be lost when machines speak for us. From travel to aviation, education to culture, the world may soon find itself rethinking how it connects — and why we still learn languages at all.



Read more here.


Apps Arrive in ChatGPT With Tools for Developers

/OpenAI


OpenAI just launched a new generation of apps inside ChatGPT — and a fresh SDK for developers to build them. Users can now chat directly with apps like Spotify, Canva, Coursera, and Zillow, seamlessly blending AI interaction with real-world tools. For developers, the new Apps SDK opens the door to over 800 million users and supports rich, conversational experiences built on the Model Context Protocol. This release marks a pivotal shift for ChatGPT — evolving from a chat assistant into a full ecosystem of interactive AI-powered apps.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Kerns – Transforms books and links into smart, self-updating summaries.

  • Julius – Connect data, ask in plain English, and get instant insights and charts.

  • Bazaar – Instantly turn screenshots into polished software demo videos.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025