Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.



Here's another crazy day in AI:

  • Tyler Cowen on the upside of workplace anxiety

  • OpenAI launches AI tools for healthcare

  • Google is rolling out an AI Inbox for Gmail

  • Some AI tools to try out


🎧 Listen to a quick breakdown of today’s stories.

Why Change Feels So Demoralizing
Another Crazy Day In AI: The Podcast

TODAY'S FEATURED ITEM: The Uncomfortable Truth About Innovation


Image Credit: Wowza (created with Ideogram)


Why does real progress so often arrive wrapped in frustration and discomfort?


In a recent episode of WorkLab from Microsoft, host Molly Wood speaks with Tyler Cowen, economist, professor at George Mason University, and co-author of the Marginal Revolution blog. Their conversation explores why AI adoption feels so challenging for employees, organizations, and educational institutions—and why that resistance might be exactly what we should expect during genuine transformation. Cowen, who predicted AI would pull us out of economic stagnation back in 2011, offers a counterintuitive thesis: the more unhappy and disoriented people feel, the better we're actually doing, because it means real change is happening. The discussion also touches on the broader implications for the future of work—why legacy institutions will struggle to adapt, what skills will matter most in an AI-driven economy, and how startups rather than established players will likely lead the transformation ahead.


Points that shape the discussion

  • Frustration and confusion often appear alongside meaningful technological change

  • Uneven adoption can create new gaps between workers and organizations

  • Longstanding institutions tend to adjust more slowly than newer organizations

  • Foundational skills such as writing, numeracy, and judgment remain essential

  • Personal relationships and credibility continue to influence opportunity

  • Public sentiment can turn negative even as long-term capabilities expand



Cowen's perspective comes from watching chess computers evolve from weak players in the 1970s to defeating the world champion by 1997. That experience showed him something many people missed: if machines could master a game requiring deep intuition, they'd eventually handle far more complex tasks than anyone expected. Throughout the episode, he's comfortable saying "we don't know yet" about where new jobs will appear or how quickly different sectors will transform. He views this as a process that will unfold over decades, creating opportunities for some while requiring difficult adjustments from others.


The discussion gets into questions many workplaces are facing right now. How do organizations help employees stay confident when AI systems can sometimes outperform them? What should universities teach when their own faculty haven't worked in AI-integrated environments? Cowen points out that new companies might have an easier time here because they can hire people who already expect to work alongside AI, while established institutions face the messier challenge of helping long-term employees reimagine their roles. He mentions potential benefits like dramatically extended lifespans and unprecedented access to learning, while acknowledging the real difficulties ahead—especially for people who've spent years building expertise in areas that may become less central to their fields.



Check it out here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


OpenAI Launches AI Tools for Healthcare

/OpenAI


OpenAI has launched OpenAI for Healthcare, introducing enterprise-grade AI tools designed specifically for regulated clinical environments. At the center is ChatGPT for Healthcare, built to support evidence-based clinical reasoning, reduce administrative workload, and integrate with institutional policies while supporting HIPAA compliance. The offering reflects a broader push to move AI from isolated clinician use into secure, organization-wide adoption. As healthcare systems face rising demand and clinician burnout, OpenAI is positioning AI as infrastructure rather than experimentation.




Read more here.


Google Is Rolling Out an AI Inbox for Gmail

/Blake Barnes (VP Product, Gmail), on Google Blogs – The Keyword


Gmail is entering what Google calls the “Gemini era,” adding AI features designed to help users manage growing inbox overload. New tools like AI Overviews summarize long email threads, while users can now ask their inbox direct questions in natural language and receive concise answers. Gmail is also introducing an AI-powered Inbox that prioritizes critical messages and to-dos, acting as a personalized briefing rather than a static message list. Together, these updates signal a shift toward email as an active assistant, not just a communication tool.




Check it out here.

SOME AI TOOLS TO TRY OUT:


  • Amie – Turn meeting notes into actionable summaries and automated workflows.

  • Design Arena – Run design battles and vote on winners to see what real users prefer.

  • Liminary – Save and recall knowledge with an AI-powered memory that works in context.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


The holidays are here—or at least the festive chaos—and AI isn’t slowing down for anyone.


Cybersecurity leaders looked back on 2025 and noticed something odd: most big risks came from the mundane, everyday systems we take for granted. Automation and autonomous agents stretched trust to its limits.


Meanwhile, a logistics powerhouse is helping its frontline teams become confident AI users, showing how tech can support, not replace, people.


And Google is giving everyone a better handle on AI-generated media, now letting users spot what’s machine-made!

Here’s to a week of clarity, curiosity, and just enough holiday cheer to keep us sane.


Here's another crazy day in AI:

  • 2025 cybersecurity lessons and what comes next

  • How FedEx is preparing workers for AI

  • Gemini can now verify AI videos

  • Some AI tools to try out

TODAY'S FEATURED ITEM: Cybersecurity's Year in Review


Image Credit: Wowza (created with Ideogram)


Is your organization prepared for the cyber threats hiding in plain sight?


The defining cybersecurity stories of 2025 came from a mix of familiar weaknesses and newly emerging risks. In a year-end discussion hosted by Matt Kosinski and Patrick Austin on IBM’s Security Intelligence Podcast, security leaders looked back on the incidents and patterns that shaped the past year. Rather than centering on one headline-grabbing breach, the conversation examined how everyday systems, workflows, and assumptions became recurring points of exposure.


The discussion covered developments such as ClickFix scams, supply chain compromises, vibecoding risks, and the growing presence of autonomous agents operating with limited oversight. Many of these issues did not introduce entirely new concepts, but they did expose how scale, speed, and complexity made long-standing security challenges harder to manage. Throughout the conversation, one theme kept resurfacing: the difficulty of knowing where trust should begin, end, or be continuously verified.




Points that stood out from the discussion

  • Trust repeatedly surfaced as a weak point across software dependencies, user access, and delegated systems.

  • Vibecoding increased development efficiency while making it harder to see how code is produced and reviewed.

  • ClickFix scams gained traction by tricking users into running malicious steps that appeared to be legitimate fixes.

  • Supply chain compromises continued to affect organizations indirectly through widely used platforms and tools.

  • Shadow agents and unauthorized systems expanded exposure without clear visibility or ownership.

  • Session hijacking raised concerns about how long access should remain valid once granted.

  • Gaps in governance and oversight became more visible as tools evolved faster than internal policies.




What made 2025 particularly challenging wasn't necessarily that attack methods were entirely new, but rather that familiar problems became harder to spot and contain. The experts noted how breaches often happened in places people thought were safe—official browser stores, AI-generated code, authenticated sessions. It turns out that assumptions about what could be trusted without regular verification ended up being costly for many organizations. The conversation also highlighted how security concerns now touch more people than ever before. Developers became targets because of their access and reputation. Employees across departments started using AI tools without fully understanding what permissions they were granting. These patterns suggest that technology alone won't solve the problem if people don't understand what's happening in the background.


Looking at what's coming, the discussion touched on several areas worth paying attention to. Quantum computing is still developing but could eventually disrupt how encryption works. Session security might need rethinking since static authentication keeps getting exploited. Organizations will probably need better ways to see what their AI agents are actually doing, especially as these tools become more autonomous. And the question that kept coming up throughout the entire episode—how do we decide what and who to trust—doesn't have easy answers. But it's a question that will likely define how security gets approached as the lines between human actions and machine actions continue to blur.




Watch on YouTube here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


How FedEx Is Preparing Workers for AI

/Joe McKendrick (Senior Contributor), on Forbes


FedEx is rolling out large-scale AI skills training for its frontline workforce, aiming to equip up to 500,000 employees with practical AI fluency. The program, developed with Accenture, focuses less on turning workers into technologists and more on helping them become informed, confident AI users in their daily roles. From route optimization to customer service and fraud detection, the training reflects how deeply AI is already embedded in FedEx’s operations. The broader goal is cultural: positioning AI as a collaborative tool that supports—not replaces—frontline workers, while keeping skills adaptable as the technology evolves.



Read more here.


Gemini Can Now Verify AI Videos

/Google Blogs – The Keyword


Google is adding a new layer of transparency to AI-generated media by allowing users to verify AI-created or AI-edited videos directly in the Gemini app. By scanning for Google’s SynthID watermark across both audio and visuals, Gemini can identify which parts of a video were generated using Google AI—and explain where those elements appear. The feature works with short video uploads and is available globally across supported languages. As AI-generated media becomes harder to spot, tools like this aim to give users clearer context rather than leaving detection to guesswork.




Check it out here.

SOME AI TOOLS TO TRY OUT:


  • LearnFlux – Turn study materials into interactive flashcards, quizzes, and practice tests.

  • Intercom – AI that answers support chats and routes conversations automatically.

  • Incredible – Create autonomous AI agents that handle repetitive tasks for you, 24/7.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Thursday evenings are supposed to be quiet, but AI updates say otherwise.


Researchers at Harvard tested AI tools for proactive mental health, giving students gentle nudges that strengthen emotional well-being before stress piles up. Early results suggest prevention can be as powerful as intervention.


Meanwhile, incoming students at a university in Indiana will soon face a new AI learning requirement built into their courses, ensuring graduates are ready for a tech-driven workforce.


And for anyone dabbling in visuals, ChatGPT Images now makes detailed edits faster and smoother—perfect for playing around with ideas over the weekend.


Here's another crazy day in AI:

  • The case for accessible preventative mental health tools

  • AI education becomes mandatory at Purdue

  • OpenAI upgrades image generation in ChatGPT

  • Some AI tools to try out

TODAY'S FEATURED ITEM: Technology for Preventative Mental Health


Image Credit: Wowza (created with Ideogram)


What if support for mental well-being didn’t start at a breaking point, but quietly strengthened people before things got hard?


College students today face unprecedented mental health challenges. While 76% report needing emotional support, only 38% actually receive it. The barriers are familiar: time constraints, limited access, stigma, and the feeling that seeking help means admitting defeat. A new Harvard Business School working paper from November 2025 examines whether AI technology might help address this gap differently. Julie Y. A. Cachia and colleagues conducted a six-week randomized controlled trial with 486 undergraduates across three U.S. institutions, testing an app called Flourish that uses an AI companion named Sunnie to deliver brief, personalized well-being check-ins and activities. Rather than treating symptoms after they emerge, the approach focuses on building emotional and social strengths as a foundation.


The study followed students over the course of an academic term, including naturally stressful periods such as midterms. Participants were randomly assigned either to use the app or to continue with existing campus supports. Researchers measured changes in emotional, social, and overall well-being at multiple points, looking not only for improvements but also for whether the intervention helped protect students from the typical declines that can happen during a busy semester.



Cachia, Julie Y.A., Xuan Zhao, John Hunter, Delancy Wu, Eta Lin, and Julian De Freitas. "AI for Proactive Mental Health: A Multi-Institutional, Longitudinal, Randomized Controlled Trial." Harvard Business School Working Paper, No. 26-030, November 2025.


What showed up in the data:

  • Students with app access reported notably higher positive affect by weeks 4 and 6, particularly feeling calmer and maintaining well-being while the control group's levels dropped

  • Loneliness decreased more among app users than controls, and their sense of belonging and connection to campus grew stronger over time

  • Mindfulness and flourishing stayed steady in the treatment group but declined in the control group as the semester progressed

  • Participants used the app roughly 3.5 times per week on average, exceeding the twice-weekly minimum researchers suggested

  • Sessions combined emotional check-ins with personalized recommendations for practices like gratitude exercises, journaling, or connecting with others offline

  • Clinical symptoms including depression, anxiety, and stress remained largely unchanged, suggesting the intervention works differently than treatments targeting diagnosed conditions

  • Results were consistent across different student backgrounds and baseline mental health, though the sample consisted of generally well-functioning individuals




The research points to a role for technology that often gets overlooked in mental health discussions—not replacing professional care or managing emergencies, but helping people maintain emotional stability during normal but stressful periods. Clinical symptoms like depression and anxiety didn't change much, which makes sense considering participants were managing everyday college pressures rather than experiencing mental health crises. What did change were measures like positive emotions, sense of connection, and ability to stay grounded. Students kept returning to the app at rates higher than required, suggesting they found something useful in it. Still, six weeks only tells us so much. We don't know if they'd keep using it after the study ended or whether the benefits would last without continued engagement.


There's a lot this study doesn't answer. Would similar results appear in non-college populations? What about people dealing with more significant mental health challenges? Could easy access to preventative tools sometimes discourage people from pursuing clinical help when they actually need it? The trial focused on three institutions over a short timeframe, so applying these findings more broadly requires careful consideration. Questions about privacy, sustained use over months or years, and how tools like this fit alongside traditional mental health services remain open.

The researchers built the app around certain principles—keeping activities grounded in psychology research, prompting real-world actions, emphasizing human relationships over AI interaction—but whether those design choices hold up across diverse situations and user needs involves complexities that extend beyond what a single study can capture. What we're looking at here is evidence that structured, accessible support produced measurable outcomes for this specific group under these particular conditions. It's useful information, but it sits within a much broader and more complicated conversation about technology's place in mental health care, one where definitive answers are still being worked out.




Read the full paper here.

OTHER INTERESTING AI HIGHLIGHTS:


AI Education Becomes Mandatory at Purdue

/Samantha Horton (All Things Considered Newscaster and Reporter), on WFYI


Purdue University is introducing a new AI learning requirement for incoming students, aiming to better prepare graduates for an AI-driven workforce. Rather than adding new classes, the university will integrate AI skills, ethics, and critical thinking directly into existing curricula, tailored to each major. University leaders say the goal is to help students understand both the power and limits of AI as the technology evolves. The requirement will begin with students entering in fall 2026, with plans to bring lessons learned to Purdue's regional campuses.



Read more here.


OpenAI Upgrades Image Generation in ChatGPT

/OpenAI, Product Release


OpenAI has released a new version of ChatGPT Images, powered by its most advanced image generation model yet. The update focuses on more precise edits, stronger instruction-following, and faster image generation—up to four times quicker than before. Users can now make detailed changes while preserving key elements like facial likeness, lighting, and composition. Alongside the model upgrade, OpenAI introduced a dedicated Images space in ChatGPT to make creative exploration faster and more intuitive.




Check it out here.

SOME AI TOOLS TO TRY OUT:


  • Trullion – Automate financial workflows for accounting and audit teams with AI.

  • Skej – AI assistant that schedules, reschedules, and follows up on meetings via email.

  • Bitrig – Turn text prompts into a working Swift app directly on your phone.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025