Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.



Google’s DeepMind and Research teams just gave their diagnostic chatbot a big brain boost. The new version isn’t just conversational; it’s multimodal. It reads your medical scans and documents while it chats with you—adding visual intelligence to its already strong language processing.


College kids are getting smarter (and not just from caffeine). Texas A&M is teaming up with Perplexity to teach AI from the ground up.


And if you’d rather not rely on cloud tools, GPT4ALL lets you run LLMs from your own desktop. That’s enough thinking for one night. See you in your inbox again soon.


Here's another crazy day in AI:

  • A diagnostic chatbot with vision

  • Mays Business School launches AI partnership with Perplexity

  • Use your favorite AI models offline with GPT4ALL

  • Some AI tools to try out


TODAY'S FEATURED ITEM: What AMIE’s Update Means for Healthcare



Image Credit: Wowza (created with Ideogram)


How might visual AI change the future of remote healthcare access?


Google DeepMind and Google Research have taken a major leap forward with their multimodal AMIE (Articulate Medical Intelligence Explorer), a research diagnostic assistant. This breakthrough, shared by Khaled Saab and Jan Freyberg on the Google Research Blog, introduces a new multimodal version of AMIE—capable not just of having diagnostic conversations but also of interpreting medical images and documents shared by patients. It highlights how conversational AI systems can now incorporate visual medical information during diagnostic discussions, potentially transforming the future of remote healthcare.


This new version of AMIE shows just how far AI has come in the field of healthcare. Not only can AMIE hold conversations and answer questions based on medical information, but it can also interpret and integrate visual data like medical images. This is a huge step forward, especially for remote consultations, where patients can now share images like X-rays or CT scans directly with the system. AMIE is able to assess this information and provide diagnostic assistance alongside the patient's description of symptoms or medical history.


Source: Google

Here are some of the key points discussed:

  • AMIE’s performance in medical conversations was rated higher than that of primary care physicians across 28 of 32 objective and subjective criteria in a blinded study with licensed professionals.

  • The new model was trained on a combination of publicly available data and simulated patient-doctor interactions created by clinicians.

  • AMIE’s reasoning is visible to users—it shows which parts of an image or document it’s referring to in its responses, making its diagnostic process more transparent.

  • It can reference prior parts of the conversation or shared images and documents to support its medical reasoning.

  • While AMIE is not a product and hasn’t been tested in real-world clinical settings, this research helps explore what might be possible for the future of AI-assisted healthcare.


Source: Google

As the boundaries between conversational AI and medical diagnostics continue to blur, this research offers a glimpse into how integrated tools might one day support physicians or even extend basic access in areas where healthcare professionals are scarce. Being able to process both dialogue and visual data means these systems can engage in more context-aware discussions—something that's especially useful when a patient can’t explain everything in words alone.


Of course, this work remains in the research stage. There are essential questions to resolve—around safety, equity, consent, and how these systems would operate alongside human clinicians in the real world. But it’s a meaningful development. It encourages a broader conversation about how multimodal AI might reshape the tools available in healthcare, not by replacing human judgment, but by enhancing how care can be delivered and understood—especially at a distance.




Read the full blog here. Read the paper here.

OTHER INTERESTING AI HIGHLIGHTS:


Mays Business School Launches AI Partnership with Perplexity

/David Swope, News Reporter, on The Battalion


Texas A&M’s Mays Business School has become the first academic institution to partner with Perplexity, granting students access to its advanced AI-powered platform, Enterprise Pro. The goal is to prepare students for a workforce increasingly shaped by artificial intelligence by giving them hands-on experience with cutting-edge tools. The initiative is part of a broader effort, including a new AI and Business minor and student competitions focused on AI applications. While some express concerns about over-reliance on AI, school leaders emphasize ethical use and AI literacy as essential skills for the future.



Read more here.


Use Your Favorite AI Models Offline with GPT4ALL

/Jack Wallen, Contributing Writer, on ZDNET


Want to run your favorite local AI models directly from your desktop? GPT4ALL is a user-friendly app that makes it easy to run open-source models like Llama, Mistral, and Orca on Linux, macOS, and Windows. With support for model switching, GPU selection, and API serving, GPT4ALL offers a powerful way to use AI privately without relying on the cloud. Installation is quick for Ubuntu-based systems, and users can immediately begin experimenting with various models and workflows — from personal research to coding help.
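If you'd rather script against a local model than click through the desktop app, GPT4ALL also ships Python bindings. Here's a minimal sketch under a couple of assumptions: the gpt4all package is installed (pip install gpt4all), and the model filename is just a placeholder for whichever .gguf model you pick from the app's catalog (it downloads on first use):

  # Minimal offline chat using the gpt4all Python bindings.
  # The model filename is an example; swap in any model from the GPT4ALL catalog.
  from gpt4all import GPT4All

  model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloaded once, then runs fully offline

  with model.chat_session():
      reply = model.generate("Give me three reasons to run LLMs locally.", max_tokens=200)
      print(reply)

Everything stays on your machine, which is the whole point: no API keys, no cloud calls, and your prompts never leave your desktop.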



Read more here.


SOME AI TOOLS TO TRY OUT:


  • Sans Writer – A clean, private space to write without distractions or metrics.

  • HiBird – AI meetings, translations, and messaging to help startups grow faster.

  • Suna by Kortix – A generalist AI agent that takes action on your behalf.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Time to swap out your work brain for your curious brain—here’s what’s new in AI.


In the latest Class Disrupted episode, Rebecca Winthrop from Brookings drops a smart idea—use “premortems” to spot what could go wrong before rolling out AI in schools.


Meanwhile, Reddit’s mad at researchers who let AI bots loose in a debate forum without telling users. Spoiler: it did not end well.


Also, Visa’s latest AI collab means your next shopping spree could be completely automated. Dangerous. By the time you read this, AI will probably have learned 10 new tricks.


Here's another crazy day in AI:

  • How to prevent AI from derailing education

  • Reddit may sue researchers over AI bot debate experiment

  • Visa introduces AI-driven solutions to transform commerce

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Preventing Educational Harm



Image Credit: Wowza (created with Ideogram)


What if we could prevent the most damaging effects of AI in schools before they even happen?


In the latest episode of Class Disrupted, hosts Diane Tavenner and Michael B. Horn sit down with Rebecca Winthrop, senior fellow at the Brookings Institution and co-author of The Disengaged Teen, to discuss her bold proposal: apply a "premortem" approach to AI in education. Rather than riding the wave of optimism, Winthrop advocates for anticipating and preparing for the worst-case scenarios AI could bring to classrooms—before they take root. The conversation explores the risks of critical thinking erosion, manipulation, and reduced socialization, while also examining how AI might catalyze a much-needed reinvention of the education system.


Here are some ideas and questions the episode brings to the surface:

  • There’s a lack of clarity about which learning tasks students need to experience themselves—and which AI can responsibly assist with.

  • Relying too heavily on AI could make it harder for students to build foundational thinking skills, especially through tasks like writing and argument-building.

  • The potential for manipulation, while not always visible, raises concerns about how students may be influenced—intentionally or not.

  • AI-mediated learning environments may limit students’ opportunities to develop social skills and interact meaningfully with peers and teachers.

  • The presence of AI in classrooms could accelerate a shift in how we define the purpose of school and learning.

  • A move from focusing on achievement to emphasizing agency could lead to more student-centered, choice-driven learning experiences.

  • Teachers may benefit from AI’s support with administrative work, but any long-term success will depend on thoughtful design and clear boundaries.



The conversation doesn't offer a blueprint or a prediction. Instead, it makes space for careful reflection on the kinds of decisions education leaders, communities, and policymakers are facing right now. AI is arriving quickly, but the question isn’t just how fast schools can catch up. It’s whether we’re asking the right questions about what AI should (or shouldn’t) be doing in learning environments—and who gets to decide.


By thinking through possible risks early—before problems become widespread—Winthrop's “premortem” mindset offers a way to approach AI with both caution and intention. It’s a reminder that technology doesn’t operate in a vacuum; it shapes and is shaped by the systems it enters. For education, that means there’s an opportunity—and a responsibility—to be more deliberate before rushing to adopt the next big thing.




Read the full transcript here. Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


Reddit May Sue Researchers Over AI Bot Debate Experiment

/Victor Tangermann on Futurism


Reddit is considering legal action against researchers from the University of Zurich who deployed AI bots on the r/changemyview subreddit without users' consent. The experiment aimed to test if AI could influence opinions in online debates, but the bots adopted controversial personas and used personal post histories to respond. Reddit condemned the experiment as unethical and in violation of its policies, and the university has since backed away, promising stricter oversight going forward. The situation highlights the growing concerns around AI deception and consent in online spaces.



Read more here.


Visa Introduces AI-Driven Solutions to Transform Commerce

/BusinessWire Newsroom


Visa is ushering in a new era of commerce with its announcement of AI-powered payment solutions and partnerships at the Global Product Drop. The company revealed its Visa Intelligent Commerce initiative, which enables AI agents to handle browsing, purchasing, and managing transactions on behalf of consumers. With collaborations involving OpenAI, Anthropic, and others, Visa is expanding its network to support secure, AI-driven shopping experiences. New tools like Visa Pay, Visa Accept, and stablecoin-linked products are also part of the push to modernize global payments.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Ztalk.ai – Breaks language barriers in video calls with real-time AI translation.

  • Luna.ai – Finds leads and writes personalized B2B outreach emails.

  • Aqua – Dictation tools and advanced speech-to-text for everyday tasks.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Let’s catch you up—because AI news doesn’t stop, but you probably did for lunch.


Meta’s AI assistant just stepped out as a standalone app. Seamless, social, and powered by Llama 4—could it knock your current AI fave off the top spot?


Meanwhile, California’s testing AI to modernize government, from traffic flow to customer service. But lawmakers are asking: is it moving too fast to stay transparent?


And Google’s AI podcast tool just hit global mode with multilingual audio recaps. Your notes now talk back—in 50+ languages, no translator needed.


Here's another crazy day in AI:

  • Meta launches standalone assistant

  • California signs new AI agreements for government operations

  • NotebookLM’s audio overview gets multilingual upgrade

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Meta AI Steps Out



Image Credit: Wowza (created with Ideogram)


Why is Meta launching a standalone AI assistant now—and what makes it different?


Meta has rolled out its first Meta AI app, a standalone experience designed to bring its AI assistant closer to users—on their terms. While Meta AI has already been part of platforms like Instagram, WhatsApp, and Facebook, this new release moves it into its own space, giving users more control, more personalization, and a chance to interact in ways that feel natural, especially through voice.


This announcement introduces a new direction: one where your digital assistant doesn’t just respond to your prompts—it gets to know you. Built on Meta’s latest Llama 4 model, the app supports both text and voice interactions, includes creative tools like image editing and generation, and offers a Discover feed where users can explore how others are engaging with the AI.


Source: Meta

What’s in the first version of Meta’s AI app:

  • Conversations via text or voice, designed to feel more fluid and responsive

  • Early use of full-duplex voice tech that lets users speak naturally, without waiting for turns

  • Access to creative tools like image generation, editing, and document help

  • A Discover section that showcases real prompts and remixes from other users

  • Syncs with Ray-Ban smart glasses for on-the-go interactions

  • Conversations can continue across web, app, and glasses

  • Web support now includes image generation and search

  • Privacy and voice settings allow users to control what gets stored or shared


Source: Meta

It’s still early days for Meta’s standalone AI assistant, and the app reflects that. Some features are experimental, and it’s clear there’s more to come. But what’s notable is the shift in how Meta is choosing to present its assistant—not just as something that shows up when you need help in an app, but as something you can engage with more intentionally.


This move raises questions about where digital assistants are headed: Will they become more personalized and proactive? Will users be open to interacting with AI outside of task-based prompts? And how will trust, privacy, and usefulness shape adoption? As with most things in AI, the technology is only part of the story—what really matters is how people end up using it, or if they do at all.




Read the full announcement here.

OTHER INTERESTING AI HIGHLIGHTS:


California Signs New AI Agreements for Government Operations

/Megan Myscofski, Statehouse/Politics Reporter, on CapRadio


California Governor Gavin Newsom has announced new agreements with tech companies to incorporate generative AI into state government operations. The tools are already being tested in areas like traffic safety and customer service and aim to boost efficiency and engagement. While the governor touts it as a bold move toward modernizing government, California’s Legislative Analyst’s Office raised concerns over the project’s pace and lack of cost transparency. With plans to launch finalized projects as early as July, lawmakers are urging more oversight.



Read more here.


NotebookLM’s Audio Overview Gets Multilingual Upgrade

/Sabrina Ortiz, Senior Editor, on ZDNET


Google's AI-powered NotebookLM just got a major upgrade: its popular Audio Overview feature now supports more than 50 languages, including Spanish, Arabic, and Hindi. The tool transforms uploaded content into conversational podcast-style summaries with AI hosts — and it performs impressively well in other languages. In testing, the AI-generated Spanish version stayed accurate and natural, proving useful for studying, language learning, and accessibility. It’s another big step in making generative AI more multilingual and globally accessible.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Tufa – Creates and schedules social media posts tailored to your industry.

  • Botsheets – Turn your Google Sheets into slides or docs instantly.

  • Pikzels – Generates viral YouTube thumbnails from text prompts in 30 seconds.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025