Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Tuesday night tech talk—unpacking the latest in AI! 💡


Managing AI agents is no longer optional—Workday’s new Agent System of Record is setting the stage for structured AI integration in the workplace. 💼


Meanwhile, Thomson Reuters secures an early court win in the ongoing battle over AI, copyright, and fair use. And with Google’s AI-powered Super Bowl ad, this year’s game might have an extra emotional punch! 🏈


Here's another crazy day in AI:

  • Workday introduces a system for AI oversight

  • Thomson Reuters wins key lawsuit over AI and copyright

  • Google’s AI Super Bowl ad celebrates fatherhood and ambition

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Workday’s Bold Move to Manage Your AI Agents


Image Credit: Wowza (created with Ideogram, edited with Canva)


What if your company's AI assistants had their own HR system? That's exactly what Workday is building towards.


As AI-powered assistants become a bigger part of business operations, companies are facing a new challenge—how to manage them effectively. Workday is introducing a solution: the Agent System of Record, a platform designed to track, control, and integrate AI agents across an organization. Josh Bersin, a leading industry analyst, breaks down what this means and why businesses might need to rethink their approach to AI governance.


Source: Workday

What to know about Workday’s Agent System of Record:

  • AI agents get a structured system – Similar to HR systems for employees, this platform helps businesses track AI assistants’ roles, access, and interactions.

  • Stronger oversight and security – Clear governance ensures AI agents operate within defined parameters, minimizing risks and maintaining compliance.

  • Seamless enterprise integration – Workday aims to bring AI assistants into its broader ecosystem, enabling smoother collaboration between AI and human workers.

  • Customization for business needs – With Workday Extend, companies can adapt AI agents to fit their specific workflows and industry requirements.

  • A long-term approach to AI management – As AI adoption grows, businesses need a structured way to scale and govern their use of intelligent agents.


Source: Workday

These systems are no longer just tools that automate tasks—they’re becoming embedded in decision-making, customer interactions, and daily operations. Without proper oversight, businesses risk inefficiencies, compliance issues, and security vulnerabilities. Workday’s approach suggests that AI governance shouldn’t be an afterthought—it needs a structured foundation from the start.


At the same time, this raises important discussions about who should oversee AI in an organization. Should IT teams take the lead? Should AI governance be a shared responsibility across departments? While there’s no one-size-fits-all answer, it's clear that companies are looking for structured ways to integrate AI into their operations without losing control. Workday’s Agent System of Record is an early glimpse into what AI oversight could look like in the future—one where businesses don’t just use AI, but actively manage and refine it to work alongside their teams.





Read the full article here.

Listen to the podcast here.

OTHER INTERESTING AI HIGHLIGHTS:


Thomson Reuters Wins Key Lawsuit Over AI And Copyright

/Richard Lawler on The Verge


A U.S. court has ruled in favor of Thomson Reuters in a copyright lawsuit against Ross Intelligence, a legal AI startup accused of using Westlaw’s proprietary content without permission. The judge dismissed Ross’s fair use defense, stating that the AI tool improperly relied on copyrighted legal summaries to build its own legal research system. This ruling sets a precedent for other AI-related copyright cases, including those against OpenAI and Microsoft. While Ross Intelligence shut down in 2021, the case highlights ongoing legal battles over how AI companies use copyrighted data in training their models.



Read more here.


Google’s AI Super Bowl Ad Celebrates Fatherhood And Ambition

/Mark Giannotto on USA TODAY


Google’s Super Bowl 2025 ad showcases its latest AI innovation, Gemini Live, integrated into the Pixel 9. The emotional commercial follows a father preparing for a job interview using Gemini’s AI-powered assistance, while flashbacks depict his journey raising his daughter—from childhood to college. The ad’s heartfelt storytelling underscores AI’s role in supporting personal and professional growth. As AI continues to dominate Super Bowl ad themes, Google joins Meta, OpenAI, and Salesforce in spotlighting AI-powered products during this year’s game.



Read more here.


SOME AI TOOLS TO TRY OUT:


  • Oasis by BeforeSunset - AI-powered workspace for the perfect work or study ambiance.

  • Reef - Analyzes spreadsheets and explains insights with charts and voice.

  • Bubble AI - Quickly build AI-powered apps.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is now on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Monday night reset—let's ease into the week with AI insights! 🌙


A study from Microsoft and Carnegie Mellon University suggests that heavy reliance on AI in professional settings may diminish critical thinking abilities, raising important discussions about AI’s impact on our intellectual skills. 🧠


On another note, don’t miss the five-point checklist designed to help CIOs evaluate key factors before implementing an AI Agent Platform. Meanwhile, OpenAI made headlines with its Super Bowl commercial, presenting ChatGPT as a transformative technology on par with fire, airplanes, and television. 🚀


Let’s see what this week has in store for us in the world of AI! 💡


Here's another crazy day in AI:

  • When AI takes over our thinking

  • Opinion: 5-point checklist to consider before adopting an AI agent platform

  • OpenAI makes its Super Bowl debut with a bold AI message

  • Some AI tools to try out


TODAY'S FEATURED ITEM: AI and the Risk of Over-Reliance


Image Credit: Wowza (created with Ideogram)


Are we unknowingly trading our critical thinking skills for AI-powered convenience?


A new study from Microsoft and Carnegie Mellon University suggests that as people rely more on AI at work, they engage in less critical thinking—potentially weakening their cognitive skills over time. Emanuel Maiberg, co-founder and journalist at 404 Media, breaks down the research, which raises important questions about AI’s impact on human intelligence and problem-solving.


The study examines how AI influences decision-making, particularly when people become overly dependent on it. While AI can boost efficiency, the ease of outsourcing thinking to machines might come at a cost.


Lee et al. The Impact of Generative AI on Critical Thinking. CHI ’25, Yokohama, Japan.

What the Research Shows:

  • Frequent AI users are more likely to accept its answers without verifying accuracy.

  • People who maintain a skeptical approach tend to evaluate AI-generated suggestions more critically.

  • Overreliance on AI can lead to repetitive, predictable responses, limiting original thought.

  • Time constraints push users to trust AI without questioning its reasoning.

  • When stakes are higher, people are more inclined to challenge AI’s output.

  • Researchers suggest AI tools should encourage critical engagement rather than passive acceptance.


Lee et al. The Impact of Generative AI on Critical Thinking. CHI ’25, Yokohama, Japan.

These findings don’t suggest AI is inherently harmful, but they do highlight an important shift in the way we process information. Unlike past technological advancements—like calculators or spell-checkers—AI doesn’t just assist with tasks; it generates content, recommendations, and solutions that people may adopt without a second thought. This has implications for workplaces, education, and even everyday decision-making.


To address this, researchers suggest that AI should be designed to prompt users to think critically. Features like explanations, alternative perspectives, or built-in fact-checking could help keep people engaged rather than simply accepting what AI provides. Instead of replacing human judgment, AI should complement and challenge it, ensuring that convenience doesn’t come at the cost of our ability to think independently.




Read the full article here.

Read the full paper here.

OTHER INTERESTING AI HIGHLIGHTS:


Opinion: 5-point Checklist to Consider Before Adopting an AI Agent Platform

/Nicholas D. Evans on CIO


With AI agents becoming integral to automation and productivity, selecting the right AI agent platform is more critical than ever. This five-point checklist helps CIOs assess key factors such as ease of use, API documentation, professional support, system uptime, and the vendor’s product roadmap. As AI tools evolve at an unprecedented pace, businesses must ensure their chosen platform is flexible, scalable, and well-supported to stay ahead of innovation.



Read more here.


OpenAI Makes its Super Bowl Debut With a Bold AI Message

/Trishla Ostwal on Adweek


OpenAI aired its first Super Bowl commercial, framing ChatGPT as the next major technological leap alongside fire, airplanes, and television. The ad, “The Intelligence Age,” highlighted how AI can assist in daily life, from summarizing articles to launching businesses. While OpenAI’s Sora video model was used in pre-visualization, the final ad relied on traditional animation and human creativity. This move marks OpenAI’s push into mainstream marketing, aiming to position ChatGPT as an essential everyday tool.



Read more here.


SOME AI TOOLS TO TRY OUT:


  • Spiral - Converts long content, like podcasts or articles, into social posts that reflect your voice.

  • Seamless.AI - B2B lead gen tool connecting you to details of 1.3 billion decision-makers.

  • Lex - Enhances your writing by suggesting improvements while maintaining your unique style.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is now on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Friday night wind-down—AI may be working overtime, but it’s time for us to relax! 😌


A team of 117 global experts from 50 countries has developed the FUTURE-AI framework—international guidelines designed to ensure that AI in healthcare is reliable, ethical, and practical. This framework aims to build trust in AI-powered medical tools and support their safe integration into patient care. 🏥


On another note, a major workforce management company plans to lay off employees to pivot towards AI investments. In another study, researchers found that AI systems often reflect a narrow set of human values, prioritizing information and utility while overlooking critical areas like empathy, justice, and civic responsibility. ⚖️


Here's another crazy day in AI:

  • A framework for responsible AI in medicine

  • Workday lays off 1,750 employees to invest in AI

  • Study finds AI training data has major ethical blind spots

  • Some AI tools to try out


TODAY'S FEATURED ITEM: FUTURE-AI and the Path to Safer Medical AI


Image Credit: Wowza (created with Ideogram, edited with Canva)


How can we make AI in healthcare something doctors and patients can truly trust?


AI has the potential to transform healthcare, but real-world adoption remains slow due to concerns about safety, bias, and transparency. A team of 117 experts from 50 countries has come together to tackle this challenge, creating the FUTURE-AI framework—an international set of guidelines aimed at making AI in healthcare more reliable, ethical, and practical. Published in The BMJ, this consensus-based framework covers the entire AI lifecycle, from development to deployment, ensuring that AI tools meet both technical and ethical standards.


BMJ 2025;388:e081554

What stands out about FUTURE-AI:

  • It defines six core principles for trustworthy AI: fairness, universality, traceability, usability, robustness, and explainability.

  • The framework includes 30 best practices that cover the entire AI lifecycle, from design and validation to deployment and monitoring.

  • It addresses major concerns like bias, data security, and patient safety, making AI more reliable in real-world healthcare.

  • The guidelines were developed through international collaboration, bringing together AI researchers, clinicians, ethicists, and policymakers.

  • FUTURE-AI is designed to evolve with new technologies and challenges, ensuring it stays relevant as AI advances.


BMJ 2025;388:e081554

Guidelines like these are essential because AI in medicine isn’t just about innovation—it’s about trust. Patients and healthcare professionals need to feel confident that AI-driven decisions are fair, reliable, and backed by strong evidence. FUTURE-AI aims to bridge the gap between research and real-world application by providing clear, actionable recommendations that developers and healthcare organizations can follow.


But creating a framework is just the beginning. The real test will be in how these principles are adopted and enforced. It will take ongoing collaboration among governments, researchers, hospitals, and technology companies to ensure AI tools meet high ethical and clinical standards. As AI continues to evolve, so will the challenges—but efforts like FUTURE-AI help lay the groundwork for a more transparent and responsible future in healthcare.




Read the full research here.

OTHER INTERESTING AI HIGHLIGHTS:


Workday Lays Off 1,750 Employees to Invest in AI

/Emma Burleigh on Fortune


Workday is laying off 1,750 employees (8.5% of its workforce) as it shifts investment toward AI-driven innovation, a trend seen across multiple industries. The move follows similar cuts at Salesforce and Klarna, where AI has replaced customer service roles and streamlined operations. With major business leaders, including Salesforce’s Marc Benioff, predicting that AI will fundamentally alter the workforce, a World Economic Forum report found that 41% of executives expect job cuts due to AI within five years. As AI reshapes industries, businesses are navigating the balance between efficiency gains and the human cost of automation.



Read more here.


Study Finds AI Training Data Has Major Ethical Blind Spots

/Ike Obi, Ph.D. student in Computer and Information Technology, Purdue University, on The Conversation


A new study from Purdue University reveals that AI systems are trained with a narrow set of human values, focusing primarily on information and utility while neglecting areas like empathy, justice, and civic responsibility. Researchers analyzed datasets from leading AI companies and found that wisdom and knowledge were the most emphasized values, while justice and human rights were among the least represented. This imbalance has major implications as AI becomes more embedded in sectors like law, healthcare, and social media, raising concerns about how well these systems reflect societal needs. The findings could help companies develop more ethically balanced AI training models moving forward.



Read more here.

Table 1: Results from the qualitative annotation of 6,501 RLHF preferences showed that Information Seeking was the most prominent human value, while Justice and Rights were the least represented value. | SOURCE: Obi et al, Value Imprint: A Technique for Auditing the Human Values Embedded in RLHF Datasets

SOME AI TOOLS TO TRY OUT:


  • CubeOne - AI handles slides and scripts while you focus on presenting.

  • Ask Concierge - Control work apps via chat—search emails, create tickets, and more.

  • Chaindesk - Build AI chatbots with your data—no coding needed.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is now on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025