
Hello, AI Enthusiasts.
In case your brain’s still running after hours…
Researchers just trained an AI on 3,700 hours of science podcasts. Not search results. Not textbooks. Podcasts. The result? A model that gets scientific questions, and the human nuance behind them, a little better.
Meanwhile, AI might help fix one of hiring’s oldest problems: confusing credentials with competence. Smart systems can now make skills-first hiring real.
Oh—and Meta just poached Apple’s AI brain. Word is, more might follow.
Here's another crazy day in AI:
Researchers improve scientific computing through audio learning
The case for AI-supported skills-first hiring
Another Apple exec jumps to Meta
Some AI tools to try out
TODAY'S FEATURED ITEM: Science in Everyday Dialogue

Image Credit: Wowza (created with Ideogram)
What if the conversations we listen to during our daily commutes could help build smarter scientific assistants?
Researchers at Boston University have developed PodGPT, a computer program that learns from over 3,700 hours of science and medicine podcasts to become significantly better at understanding and answering scientific questions. Published in npj Biomedical Innovations, this study by Dr. Vijaya B. Kolachalama and his team explores how incorporating real expert conversations can enhance how artificial intelligence systems process and respond to complex scientific topics.
Most large language models today are trained on written material such as textbooks, research papers, and websites. PodGPT takes a different route. By learning from how scientists and medical professionals actually talk about their work in interviews, lectures, and public discussions, the model absorbs a different kind of knowledge—not just technical content, but how ideas are explained, questioned, and connected in real time. The team then paired this conversational training with a retrieval system that links the model’s answers to verified scientific publications. The result is a tool that not only performs well on standard benchmarks but also shows signs of understanding scientific language in a more natural and flexible way.
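For readers who like to see the moving parts, here is a minimal sketch of that general recipe in Python: transcribe audio into text, then continue pretraining an open language model on the transcripts. This is not the team's actual pipeline; the Whisper and GPT-2 models, the file names, and the training settings below are stand-ins chosen purely for illustration.

```python
# Minimal sketch of "learn from podcasts": speech-to-text, then continued
# pretraining on the transcripts. Model names, file paths, and hyperparameters
# are illustrative assumptions, not details from the PodGPT paper.
import whisper
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# 1. Speech-to-text: turn each podcast episode into a transcript.
asr = whisper.load_model("base")
episodes = ["episode_001.mp3", "episode_002.mp3"]  # placeholder files
transcripts = [asr.transcribe(path)["text"] for path in episodes]

# 2. Continued pretraining: tokenize the transcripts and train with a
#    causal language modeling objective.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in base model
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = Dataset.from_dict({"text": transcripts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="podcast-lm", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The paper works at a far larger scale (more than 42 million tokens of transcribed audio), but the two-step shape, speech-to-text followed by continued pretraining, is the core idea.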
Research Outcomes
Audio-based learning: The system trains on genuine expert conversations, learning how scientists and medical professionals naturally discuss and explain complex topics
Large-scale dataset: Over 3,700 hours of publicly available science and medicine podcasts were processed, generating more than 42 million text tokens for training
Performance improvements: Testing showed average gains of 1.82 percentage points on standard benchmarks, rising to 2.43 points when the model was paired with the scientific-literature retrieval system
Language versatility: The model demonstrated a 1.18 percentage point improvement in handling questions across different languages without dedicated multilingual training
Healthcare applications: Potential uses include supporting research and education in areas like Alzheimer's disease, cardiovascular health, cancer, and mental health
Educational potential: The approach could apply to other audio content including academic lectures, conference talks, and educational interviews
Literature connectivity: Uses retrieval-augmented generation to link responses with current peer-reviewed research from medical and scientific journals (a rough sketch of the idea follows below)
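To make that last point concrete, here is a rough Python sketch of the retrieval-augmented generation idea: embed a small set of abstracts, pull the closest match to a question, and hand it to the model as supporting context. The sentence-transformers model, the toy corpus, and the prompt format are illustrative assumptions, not details from the paper.

```python
# Rough illustration of retrieval-augmented generation: retrieve the most
# relevant abstract for a question and prepend it to the prompt so answers
# can lean on verified literature. Corpus entries are placeholders.
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Abstract: Amyloid-beta plaques are a pathological hallmark of Alzheimer's disease...",
    "Abstract: Statins reduce LDL cholesterol and lower cardiovascular risk...",
]
question = "What role do amyloid-beta plaques play in Alzheimer's disease?"

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)
query_emb = embedder.encode(question, convert_to_tensor=True)

# Take the single most similar abstract as supporting context.
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)[0]
context = corpus[hits[0]["corpus_id"]]

prompt = (
    "Answer the question using the cited literature.\n\n"
    f"Literature: {context}\n\nQuestion: {question}\nAnswer:"
)
# `prompt` would then be passed to the podcast-trained language model.
print(prompt)
```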
The distinction between how experts write and how they speak reveals something important about knowledge transfer. When scientists publish papers, they follow strict conventions that prioritize precision and peer review standards. However, when the same experts discuss their work in podcasts or give presentations, they often employ different communication strategies—they provide background context, use analogies to clarify difficult concepts, and explain the reasoning behind their approaches. This conversational knowledge appears to contain valuable educational information that formal written sources might not capture as effectively. The Boston University team found that incorporating this audio-derived understanding helped their model perform better across various testing scenarios, suggesting that the informal explanations common in expert discussions carry meaningful pedagogical value.
The research also highlights interesting questions about how we might better utilize the wealth of expert knowledge that exists in audio formats. While this study focused on publicly available podcasts, the improved performance across different languages suggests that conversational patterns learned from English-language content might help systems understand scientific concepts more broadly. As scientific fields become increasingly specialized and collaborative, tools that can process and communicate knowledge in more intuitive ways may prove valuable for researchers, educators, and students navigating complex topics. The challenge moving forward will be determining how to effectively scale these conversational learning approaches while maintaining the accuracy and reliability that scientific applications demand. The work demonstrates that audio content contains untapped educational value, but questions remain about implementation, quality assurance, and broader applications as this technology continues to develop.
Read the full article here.
Read the research here.
OTHER INTERESTING AI HIGHLIGHTS:
The Case for AI-Supported Skills-First Hiring
/Papia Debroy (Nonresident Senior Fellow, Brookings Metro) and Byron Auguste (CEO and Co-founder, Opportunity@Work), on Brookings
AI is shaking up hiring practices—but it could also help fix them. In this piece, Debroy and Auguste argue that generative AI offers a chance to replace outdated degree requirements with real-time assessments of actual skills. With millions of skilled workers shut out of higher-wage jobs by the so-called “paper ceiling,” AI could become a tool for equity and inclusion—if used with intention. The authors call on employers to rewire hiring systems toward opportunity by focusing on what workers can do, not what credentials they hold.
Read more here.
Another Apple Exec Jumps to Meta
/Mark Gurman (Managing Editor), on Bloomberg
Meta just poached one of Apple’s most important AI leaders—and it may not stop there. Ruoming Pang, who led Apple’s foundational models team, is joining Meta’s new superintelligence unit as part of a wave of high-profile hires across the AI sector. Meta reportedly offered Pang a compensation package worth tens of millions, as it continues its aggressive push to dominate the AI talent war. Apple’s internal AI strategy now faces more pressure, as multiple engineers consider following Pang out the door.
Read more here.
SOME AI TOOLS TO TRY OUT:
That’s a wrap on today’s Almost Daily craziness.
Catch us almost every day—almost! 😉
EXCITING NEWS:
The Another Crazy Day in AI newsletter is on LinkedIn!!!

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we use AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.