
Another Crazy Day in AI: How Security Challenges Evolved This Year

Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


The holidays are here—or at least the festive chaos—and AI isn’t slowing down for anyone.


Cybersecurity leaders looked back on 2025 and noticed something odd: most big risks came from the mundane, everyday systems we take for granted. Automation and autonomous agents stretched trust to its limits.


Meanwhile, a logistics powerhouse is helping its frontline teams become confident AI users, showing how tech can support, not replace, people.


And Google is giving everyone a better handle on AI-generated media, now letting users spot what’s machine-made!

Here’s to a week of clarity, curiosity, and just enough holiday cheer to keep us sane.


Here's another crazy day in AI:

  • 2025 cybersecurity lessons and what comes next

  • How FedEx is preparing workers for AI

  • Gemini can now verify AI videos

  • Some AI tools to try out

TODAY'S FEATURED ITEM: Cybersecurity's Year in Review

A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


Is your organization prepared for the cyber threats hiding in plain sight?


The defining cybersecurity stories of 2025 came from a mix of familiar weaknesses and newly emerging risks. In a year-end discussion hosted by Matt Kosinski and Patrick Austin on IBM’s Security Intelligence Podcast, security leaders looked back on the incidents and patterns that shaped the past year. Rather than centering on one headline-grabbing breach, the conversation examined how everyday systems, workflows, and assumptions became recurring points of exposure.


The discussion covered developments such as ClickFix scams, supply chain compromises, vibecoding risks, and the growing presence of autonomous agents operating with limited oversight. Many of these issues did not introduce entirely new concepts, but they did expose how scale, speed, and complexity made long-standing security challenges harder to manage. Throughout the conversation, one theme kept resurfacing: the difficulty of knowing where trust should begin, end, or be continuously verified.




Points that stood out from the discussion

  • Trust repeatedly surfaced as a weak point across software dependencies, user access, and delegated systems.

  • Vibecoding increased development efficiency while making it harder to see how code is produced and reviewed.

  • ClickFix scams gained traction by relying on users to carry out actions that appeared legitimate.

  • Supply chain compromises continued to affect organizations indirectly through widely used platforms and tools.

  • Shadow agents and unauthorized systems expanded exposure without clear visibility or ownership.

  • Session hijacking raised concerns about how long access should remain valid once granted.

  • Gaps in governance and oversight became more visible as tools evolved faster than internal policies.




What made 2025 particularly challenging wasn't that attack methods were entirely new, but that familiar problems became harder to spot and contain. The experts noted how breaches often happened in places people assumed were safe—official browser stores, AI-generated code, authenticated sessions. Assumptions about what could be trusted without regular verification proved costly for many organizations.

The conversation also highlighted how security concerns now touch more people than ever before. Developers became targets because of their access and reputation. Employees across departments adopted AI tools without fully understanding what permissions they were granting. These patterns suggest that technology alone won't solve the problem if people don't understand what's happening in the background.


Looking at what's coming, the discussion touched on several areas worth paying attention to. Quantum computing is still developing but could eventually disrupt how encryption works. Session security might need rethinking since static authentication keeps getting exploited. Organizations will probably need better ways to see what their AI agents are actually doing, especially as these tools become more autonomous. And the question that kept coming up throughout the entire episode—how do we decide what and who to trust—doesn't have easy answers. But it's a question that will likely define how security gets approached as the lines between human actions and machine actions continue to blur.
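
That question of where trust should end is easier to picture with a concrete pattern. As a thought experiment (ours, not the podcast's), here is a minimal Python sketch of one alternative to static authentication: short-lived, signed session tokens that expire quickly and force re-verification. The signing key, the five-minute TTL, and the helper names are illustrative assumptions; a real system would rely on a vetted standard such as OAuth or JWT libraries rather than hand-rolled tokens. (Python 3.10+)

import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # hypothetical server-side signing key
TOKEN_TTL_SECONDS = 300               # short-lived on purpose: access must be re-earned

def issue_token(user_id: str) -> str:
    """Issue a signed token that embeds its own expiry time."""
    expires_at = int(time.time()) + TOKEN_TTL_SECONDS
    payload = f"{user_id}:{expires_at}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{signature}"

def verify_token(token: str) -> str | None:
    """Return the user id if the token is authentic and unexpired, else None."""
    try:
        user_id, expires_at, signature = token.rsplit(":", 2)
    except ValueError:
        return None  # malformed token
    payload = f"{user_id}:{expires_at}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # forged or tampered token
    if time.time() > int(expires_at):
        return None  # expired: the caller must re-authenticate
    return user_id

token = issue_token("alice")
print(verify_token(token))  # "alice" while fresh; None once the TTL lapses

The mechanics matter less than the habit they encode: access is re-checked continuously instead of being granted once and trusted indefinitely, which is exactly the tension the panel kept returning to.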




Watch on YouTube here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


How FedEx Is Preparing Workers for AI

/Joe McKendrick (Senior Contributor), on Forbes


FedEx is rolling out large-scale AI skills training for its frontline workforce, aiming to equip up to 500,000 employees with practical AI fluency. The program, developed with Accenture, focuses less on turning workers into technologists and more on helping them become informed, confident AI users in their daily roles. From route optimization to customer service and fraud detection, the training reflects how deeply AI is already embedded in FedEx’s operations. The broader goal is cultural: positioning AI as a collaborative tool that supports—not replaces—frontline workers, while keeping skills adaptable as the technology evolves.



Read more here.


Gemini Can Now Verify AI Videos

/Google Blogs - The Keyword


Google is adding a new layer of transparency to AI-generated media by allowing users to verify AI-created or AI-edited videos directly in the Gemini app. By scanning for Google’s SynthID watermark across both audio and visuals, Gemini can identify which parts of a video were generated using Google AI—and explain where those elements appear. The feature works with short video uploads and is available globally across supported languages. As AI-generated media becomes harder to spot, tools like this aim to give users clearer context rather than leaving detection to guesswork.




Check it out here.

SOME AI TOOLS TO TRY OUT:


  • LearnFlux – Turn study materials into interactive flashcards, quizzes, and practice tests.

  • Intercom – AI that answers support chats and routes conversations automatically.

  • Incredible – Create autonomous AI agents that handle repetitive tasks for you, 24/7.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.








Copyright Wowza, Inc. 2025