Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.



How’s your Monday night going? Ready for what’s ahead this week?


Claude just leveled up. Now, it’s linking up with all your tools, so it’s more than just a chatbot—it’s your new, context-aware work assistant.


And in the finance world, bankers are thriving with AI that finds answers fast—95% faster, to be exact. JPMorgan’s got the receipts.


But that’s not all—AI’s new role? Therapist, planner, and maybe... life coach. Emotional support is now a top use case.

If this is how the week starts, imagine Friday. Stay curious.


Here's another crazy day in AI:

  • Claude now works with your tools

  • JPMorgan credits AI for sales surge during market swings

  • How people use AI in 2025, ranked by popularity

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Claude Joins Your Workflow


A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


Have you ever wondered what would happen if your AI assistant could directly access and work with all your favorite apps and tools?


Anthropic recently introduced Integrations, a new feature that allows Claude to connect with the tools you already use. Along with this, they’ve expanded Claude’s Advanced Research capabilities, making it easier for the AI to gather information not just from the web, but also from your connected apps and documents. Announced last Friday, this update reflects Anthropic’s continued focus on making Claude more context-aware, better equipped to assist with real-world tasks rather than just providing answers.


With Integrations, Claude is able to understand more about your work by connecting with platforms like Atlassian’s Jira and Confluence, Zapier, Intercom, Asana, PayPal, and more. This means that Claude isn’t just responding to questions—it’s acting as a more capable collaborator, helping you organize information, manage tasks, and even take action across different platforms.



Here’s how the update works:

  • Claude now integrates with 10+ popular services, including Jira, Asana, Zapier, and Intercom

  • Developers can create custom integrations using the open Model Context Protocol (MCP)

  • Claude can help manage tasks, track progress, and generate work items across platforms

  • Zapier integration provides access to thousands of apps, automating processes and workflows

  • With Jira and Confluence, Claude can assist in planning, documentation, and task management

  • Advanced Research mode helps Claude conduct deeper investigations and provide comprehensive reports

  • Reports come with citations for full transparency

  • These features are in beta for Max, Team, and Enterprise plans; web search is now globally available for all paid plans
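
For the developer-minded: the Model Context Protocol mentioned above is built on JSON-RPC 2.0, so a client like Claude invokes a tool exposed by an integration server with a "tools/call" request. The sketch below shows roughly what that message looks like; the tool name and arguments are invented for illustration, not taken from any real integration.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP-style JSON-RPC 2.0 request to invoke a server-side tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # MCP method for invoking a named tool
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: asking an integration server to create a Jira issue
msg = make_tool_call(1, "create_jira_issue", {"summary": "Fix login bug"})
print(msg)
```

A real MCP server would also answer "tools/list" requests so the client can discover which tools it offers before calling them.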



The real value of this update lies in how it changes the way we work with AI. Claude’s new capabilities make it less of a passive assistant and more of an active partner in managing your daily tasks. Instead of switching between multiple platforms or manually gathering information, Claude now serves as a bridge between your apps, bringing everything together in one place.


This integration of AI with your existing tools makes Claude a much more practical and intuitive collaborator. The ability to streamline workflows and automate tasks is becoming increasingly important, and updates like these bring AI closer to being a truly seamless part of how we work. For anyone using these tools already, the update promises to improve productivity and make everyday tasks a bit easier to manage.




Read the full article here.

OTHER INTERESTING AI HIGHLIGHTS:


JPMorgan Credits AI for Sales Surge During Market Swings

/Nupur Anand, U.S. Banking Correspondent in NY, on Reuters


JPMorgan Chase’s AI tools enabled its bankers to swiftly respond to client needs during a turbulent market period, especially in April’s record-setting volatility. AI helped financial advisers quickly access client-specific data and deliver personalized investment advice, increasing both client satisfaction and productivity. The bank’s “Coach AI” tool reportedly helped reduce time spent searching for information by 95%, contributing to a 20% jump in gross sales from 2023 to 2024. JPMorgan plans to grow its client base by 50% in 3–5 years using AI, and estimates $1.5 billion in savings already, with more gains expected.



Read more here.


How People Use AI in 2025, Ranked by Popularity

/Pallavi Rao on Visual Capitalist [Graphics/Design by Miranda Smith]


AI has evolved into a life companion in 2025, with its most common uses revolving around emotional and personal support—like therapy, organizing life, and helping people find purpose. These top use cases reveal a shift from last year’s focus on idea generation and search, highlighting AI’s role in addressing modern emotional needs. Learning and creativity remain strong categories, but AI search has declined, possibly due to its integration with platforms like Google. While experts praise AI for supporting mindfulness and CBT, they also caution against relying on it at the expense of real human connections.



Read more here.

Graphics/Design by Miranda Smith | Source: Harvard Business Review, Marc Zao-Sanders

SOME AI TOOLS TO TRY OUT:


  • Stackie – Log, track, recall, and reflect effortlessly with AI.

  • Raycast – Sync AI chats, notes, and snippets across Apple devices.

  • Cerebro – Make videos, articles, and books searchable and actionable.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.



Another week down, another excuse to celebrate. But before you toast to the weekend, here’s a peek at something refreshingly practical: Google wants to make language learning less robotic. With three new AI experiments powered by Gemini, users can now get help speaking like real people, responding in the moment, and even learning from the world around them.


Meanwhile, a landmark lawsuit against Meta is heating up. Turns out some pirated books may have fueled Llama’s brainpower—and a group of authors isn’t thrilled about it.


And yes, AI just got into business school. One professor ran OpenAI’s latest model through a Harvard case study—and it nearly beat the official answer key.


Power down your inbox and power up your weekend.


Here's another crazy day in AI:

  • Tools for real-world language learning

  • Meta faces key copyright test over AI training

  • o3 nails Harvard case in one go from PDF

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Learning by Doing and Seeing



Image Credit: Wowza (created with Ideogram)


Ever been caught in a foreign country needing the perfect phrase — and realizing you never learned it?


In a bid to make language learning more useful in real-life situations, Google has launched three early-stage AI experiments designed to help people learn languages in a more dynamic and personalized way. TechCrunch's consumer news reporter, Aisha Malik, dives into how Google's Gemini model powers tools that go beyond textbook grammar and formal speech, and instead help you speak like a local, respond in the moment, and even learn from what you see around you.


Image Credits: Google

Here’s a look at the new experiments and what they aim to do:

  • Tiny Lessons gives you quick, context-specific vocabulary and grammar tips. You describe a situation (like “finding a lost passport”), and the tool responds with relevant phrases and possible replies.

  • Slang Hang helps you pick up casual, everyday language by simulating conversations between native speakers. It’s designed to show how real conversations unfold — complete with slang and informal tone.

  • Word Cam lets you snap a photo of your surroundings to learn vocabulary for the objects in front of you. It offers additional words too, encouraging deeper observation and more immersive learning.


Image Credits: Google

These tools are accessible via Google Labs and currently support a wide range of languages, including Arabic, Hindi, Japanese, French, Portuguese, and many others.


What makes these experiments worth paying attention to is the way they attempt to meet learners where they actually are—in unfamiliar places, unexpected conversations, or simply curious moments during the day. Instead of relying solely on memorized vocabulary or structured lessons, they offer ways to engage with language as it naturally shows up in everyday life. Whether you’re trying to report a lost passport or just want to understand what a street vendor said, tools like these are built for those real, often messy moments of communication.


Image Credits: Google

There’s still plenty to improve, especially when it comes to accuracy and nuance. But the idea that language tools could be more responsive, visual, and grounded in real situations is a meaningful development for independent learners. These experiments might not replace traditional study, but they could help bridge the gap between classroom learning and the real-world confidence it takes to speak up when it matters most.




Read the full article here.

Check them out on Google Labs.

OTHER INTERESTING AI HIGHLIGHTS:


Meta Faces Key Copyright Test Over AI Training

/Cristina Criddle and Hannah Murphy on Financial Times


Meta is heading to court in one of the first major legal tests of whether tech companies can use copyrighted materials to train AI. A group of U.S. authors, including Ta-Nehisi Coates, claim Meta scraped content from LibGen, a shadow library hosting pirated books, to train its Llama models. Internal emails revealed during discovery suggest Meta staff knowingly entered a legal grey area and took steps to obscure their data sources. The case could set a precedent for how copyright law applies to AI and whether “fair use” extends to training generative models.



Read more here.


o3 Nails Harvard Case in One Go from PDF

/Ethan Mollick on LinkedIn


Ethan Mollick tested OpenAI's new o3 model on a Harvard Business School case—and the results were impressive. From a single PDF, the AI generated a full set of financial projections and compared its output to the official case data, coming within 4% of the correct answers. Mollick highlights that while o3 still occasionally hallucinates, its MBA-level reasoning and modeling were remarkably coherent. The comparison revealed minor, explainable discrepancies, showcasing the tool’s potential in academic and analytical work.



Read more here.

Image Credit: Ethan Mollick

SOME AI TOOLS TO TRY OUT:


  • Chikka – AI voice interviewer that chats and extracts actionable insights instantly.

  • Freebeat – Turn music and ideas into viral videos with one click.

  • ContentRadar – Create and repurpose LinkedIn and X content effortlessly using AI.







Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.



Google’s DeepMind and Research teams just gave their diagnostic chatbot a big brain boost. The new version isn’t just conversational; it’s multimodal. It reads your medical scans and documents while it chats with you, adding visual intelligence to its already strong language processing.


College kids are getting smarter (and not just from caffeine). Texas A&M is rolling with Perplexity to teach AI from the ground up.


And if you’d rather not rely on cloud tools, GPT4ALL lets you run LLMs from your own desktop. That’s enough thinking for one night. See you in your inbox again soon.


Here's another crazy day in AI:

  • A diagnostic chatbot with vision

  • Mays Business School launches AI partnership with Perplexity

  • Use your favorite AI models offline with GPT4ALL

  • Some AI tools to try out


TODAY'S FEATURED ITEM: What AMIE’s Update Means for Healthcare



Image Credit: Wowza (created with Ideogram)


How might visual AI change the future of remote healthcare access?


Google DeepMind and Google Research have taken a major leap forward with their multimodal AMIE (Articulate Medical Intelligence Explorer), a research diagnostic assistant. This breakthrough, shared by Khaled Saab and Jan Freyberg on the Google Research Blog, introduces a new multimodal version of AMIE—capable not just of having diagnostic conversations but also of interpreting medical images and documents shared by patients. It highlights how conversational AI systems can now incorporate visual medical information during diagnostic discussions, potentially transforming the future of remote healthcare.


This new version of AMIE shows just how far AI has come in the field of healthcare. Not only can AMIE hold conversations and answer questions based on medical information, but it can also interpret and integrate visual data like medical images. This is a huge step forward, especially for remote consultations, where patients can now share images like X-rays or CT scans directly with the system. AMIE is able to assess this information and provide diagnostic assistance alongside the patient's description of symptoms or medical history.


Source: Google

Here are some of the key points discussed:

  • AMIE’s performance in medical conversations was rated higher than that of primary care physicians across 28 of 32 objective and subjective criteria in a blinded study with licensed professionals.

  • The new model was trained on a combination of publicly available data and simulated patient-doctor interactions created by clinicians.

  • AMIE’s reasoning is visible to users—it shows which parts of an image or document it’s referring to in its responses, making its diagnostic process more transparent.

  • It can reference prior parts of the conversation or shared images and documents to support its medical reasoning.

  • While AMIE is not a product and hasn’t been tested in real-world clinical settings, this research helps explore what might be possible for the future of AI-assisted healthcare.


Source: Google

As the boundaries between conversational AI and medical diagnostics continue to blur, this research offers a glimpse into how integrated tools might one day support physicians or even extend basic access in areas where healthcare professionals are scarce. Being able to process both dialogue and visual data means these systems can engage in more context-aware discussions—something that's especially useful when a patient can’t explain everything in words alone.


Of course, this work remains in the research stage. There are essential questions to resolve—around safety, equity, consent, and how these systems would operate alongside human clinicians in the real world. But it’s a meaningful development. It encourages a broader conversation about how multimodal AI might reshape the tools available in healthcare, not by replacing human judgment, but by enhancing how care can be delivered and understood—especially at a distance.




Read the full blog here. Read the paper here.

OTHER INTERESTING AI HIGHLIGHTS:


Mays Business School Launches AI Partnership with Perplexity

/David Swope, News Reporter, on The Battalion


Texas A&M’s Mays Business School has become the first academic institution to partner with Perplexity, granting students access to its advanced AI-powered platform, Enterprise Pro. The goal is to prepare students for a workforce increasingly shaped by artificial intelligence by giving them hands-on experience with cutting-edge tools. The initiative is part of a broader effort, including a new AI and Business minor and student competitions focused on AI applications. While some express concerns about over-reliance on AI, school leaders emphasize ethical use and AI literacy as essential skills for the future.



Read more here.


Use Your Favorite AI Models Offline with GPT4ALL

/Jack Wallen, Contributing Writer, on ZDNET


Want to run your favorite local AI models directly from your desktop? GPT4ALL is a user-friendly app that makes it easy to run open-source models like Llama, Mistral, and Orca on Linux, macOS, and Windows. With support for model switching, GPU selection, and API serving, GPT4ALL offers a powerful way to use AI privately without relying on the cloud. Installation is quick for Ubuntu-based systems, and users can immediately begin experimenting with various models and workflows — from personal research to coding help.
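
If you enable GPT4ALL's API serving, it can expose a local OpenAI-compatible endpoint, which means scripts can talk to your offline model the same way they'd talk to a cloud API. The sketch below builds such a request with only the Python standard library; the port, endpoint path, and model name are assumptions for illustration and should be checked against your own install's server settings.

```python
import json
import urllib.request

# Assumed local endpoint for GPT4ALL's OpenAI-compatible server (illustrative).
API_URL = "http://localhost:4891/v1/chat/completions"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat completion payload for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
    }
    return json.dumps(payload).encode("utf-8")

def ask_local_model(model, prompt):
    """Send the prompt to the local server; requires GPT4ALL running with API serving on."""
    req = urllib.request.Request(
        API_URL,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request shape mirrors the cloud APIs, existing tooling that speaks the OpenAI format can often be pointed at the local server with just a base-URL change.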



Read more here.


SOME AI TOOLS TO TRY OUT:


  • Sans Writer – A clean, private space to write without distractions or metrics.

  • HiBird – AI meetings, translations, and messaging to help startups grow faster.

  • Suna by Kortix – A generalist AI agent that takes action on your behalf.







Copyright Wowza, Inc. 2025