Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Is your week keeping up with AI? Because it’s moving fast.


Meet Gemini 3, Google’s newest AI brainchild. It’s got reasoning, vision, and conversation all rolled into one… and yes, it might make last year’s models look a little quaint.


Meanwhile, researchers are mapping out how rural K–12 schools can integrate AI responsibly, designing strategies that meet local needs and prepare teachers for the classroom of the future.


And for those curious about how AI works in the wild, digitalNow 2025 delivered. Nearly 300 leaders shared examples of moving AI from concept to tangible results.


Who knows what tomorrow’s AI headlines will bring?


Here's another crazy day in AI:

  • Inside Gemini 3 and what it promises

  • Rural K–12 schools get AI integration support

  • Associations move from AI theory to practice

  • Some AI tools to try out

TODAY'S FEATURED ITEM: A Look at Google's Gemini 3

A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


How should we evaluate progress when new models focus heavily on reasoning, context, and decision-making?


Google just released Gemini 3, their most advanced AI model to date. The announcement came from Sundar Pichai (CEO of Google and Alphabet), Demis Hassabis (CEO of Google DeepMind), and Koray Kavukcuoglu (CTO of Google DeepMind) on The Keyword. This release caps nearly two years of development since the original Gemini launched. The company reports that their Gemini app now reaches over 650 million monthly users, while AI Overviews serves 2 billion people each month. According to Google, Gemini 3 combines advanced reasoning, multimodal understanding, and agentic capabilities in ways previous versions couldn't match.




Key points to note:

  • Gemini 3 is designed to handle multi-step reasoning tasks and more complex problem-solving.

  • Longer inputs are used to test whether the model can maintain context over extended interactions.

  • Evaluations include scenarios with incomplete or ambiguous information to see how it navigates uncertainty.

  • Some assessments focus on the model’s ability to explain the reasoning behind its answers.

  • Collaboration across research teams is helping establish more consistent standards for evaluating advanced capabilities.





The benchmark numbers look impressive, but they're measuring performance in controlled settings. Real-world use is messier—your requests aren't always clear, tasks don't fit neat categories, and you often need help with something the model hasn't been specifically tested on. Google shared examples like translating handwritten family recipes, analyzing sports videos for technique tips, and turning research papers into interactive study guides. They also launched Google Antigravity, where AI agents can plan projects, write code, and check their work independently. The company ran safety evaluations with internal teams and external organizations like the UK AISI and Apollo before release. These applications sound genuinely useful, though launch examples tend to show things at their best rather than their average.


What happens next matters more than what's in the announcement. As people actually start using Gemini 3 for their own projects and problems, we'll see where it delivers and where it doesn't. Google's focusing on better reasoning and contextual understanding, with agents that can handle complete workflows instead of just answering individual questions. That could make a real difference in how we interact with AI, or it might turn out that simpler, more predictable tools work better for most tasks. The gap between a polished demo and something you'd trust to handle important work on its own is often wider than it looks at launch. Time and regular use will show whether Gemini 3's approach actually solves problems people have been struggling with.




Read the full article here.

OTHER INTERESTING AI HIGHLIGHTS:


Rural K–12 Schools Get AI Integration Support

/News Staff, on Government Technology


Washington State University (WSU) researchers are developing an AI integration road map for rural K–12 schools, supported by an $82,500 grant from Microsoft. Assistant professors Tingting Li and Peng He will lead the Rural AI for Societal Equity (RAISE) project, working with educators and technology developers to design strategies grounded in local needs. The six-month initiative will study teacher-AI interactions, conduct workshops, and gather insights from administrators to close gaps in AI guidance for rural districts. The goal is to create a model that other states can adopt to responsibly implement AI in education.



Read more here.


Associations Move from AI Theory to Practice

/Hosts Amith Nagarajan and Mallory Mejias, Sidecar Sync Podcast


A recent episode of Sidecar Sync captures key insights from digitalNow 2025 in Chicago, where nearly 300 association leaders shared how AI is moving from concept to real-world application. Hosts Mallory Mejias and Amith Nagarajan explore practical examples—from AI chatbots reducing support calls to strategy frameworks like the St. Louis Arch that align boards and staff. The discussion emphasizes staff education, generational differences, bold experimentation, and youth perspectives, highlighting how associations are maturing in their AI adoption to improve operations and member services.



Check it out here.

SOME AI TOOLS TO TRY OUT:


  • Willow – Dictate emails, messages, and documents faster than typing with AI voice dictation.

  • Instories – All-in-one content creation tool to create, edit, and generate images and videos.

  • MyLens – Turn any YouTube video into a clickable AI timeline of key moments.

That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.







Week done? Great. Here’s something to leave you thinking.


A recent podcast episode took a surprisingly cool turn into the world of digital scent—how it’s captured, modeled, and maybe one day streamed like data. If you’ve never thought about what “smell tech” looks like, this one’s worth the listen.


Penn GSE is also shaking things up with a new hands-on program that lets students audit real AI systems to spot bias.


And Google’s newest shopping updates make holiday browsing much easier. You can search conversationally and get real-time product info without the usual headache.


Now go enjoy your weekend. You’ve earned a little mental wandering.


Here's another crazy day in AI:

  • How one company "teleported" a plum's smell

  • Penn GSE launches classroom-ready AI auditing program

  • Google rolls out AI updates for seasonal shopping

  • Some AI tools to try out

TODAY'S FEATURED ITEM: Building the World's First Digital Nose


Image Credit: Wowza (created with Ideogram)


What if your next text message didn't just show a photo but delivered the actual smell behind it?


In a recent episode of The Neuron Podcast, host Pete Huang leads a conversation on emerging sensory technology with Osmo Founder and CEO Alex Wiltschko. Together, they discuss the methods Osmo is using to digitize smell, the scientific reasoning behind the work, and the potential applications that may develop as the technology continues to advance. The episode walks through how scent is captured, modeled, and recreated, offering a closer look at a field that rarely receives attention but touches many aspects of everyday life.



What the conversation explores:

  • Osmo uses a read→map→write approach to digitize scent, similar to processes used for audio or visual signals.

  • Humans rely on over 300 olfactory receptors, making smell far more complex than color perception.

  • The team successfully recreated the scent of a fresh-cut plum in another room using sensors and a molecular printer.

  • Osmo Studio reduces fragrance development timelines from more than a year to roughly one week.

  • Osmo has created three new fragrance molecules that don’t exist in nature.

  • Smell is closely linked to areas of the brain that govern memory and emotion.

  • Potential applications include early disease detection using scent-based chemical markers.

  • Long-term goals include creating smaller, portable sensors that could be integrated into everyday devices.

  • One project generated a museum’s signature scent from a single photograph.

  • The episode discusses how progress in emerging technologies often accelerates as data and tools improve over time.



Alex has spent about 20 years working on this problem, including time at Google Brain before founding Osmo three years ago. Right now, the technology finds its main use in the fragrance industry, where it genuinely speeds things up. Traditional perfume development takes well over a year, but being able to describe what you want and get samples within days changes how that works. The conversation also gets into other potential uses like creating scents for museums, spotting counterfeit products through molecular analysis, and the bigger ambition of using smell to detect diseases early.


The episode is worth checking out if you're curious about how technology tackles problems that seemed nearly impossible until recently. Smell is one of those things we experience constantly but rarely think about in technical terms. Whether digital scent becomes something we all encounter or stays within specialized fields probably depends on cost, accuracy, and whether enough practical reasons emerge for people to use it. Either way, it's interesting to hear how the science actually works without all the hype that usually surrounds emerging tech.




Watch on YouTube here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


Penn GSE Launches Classroom-Ready AI Auditing Program

/Nora Garg, on The Daily Pennsylvanian


Penn’s Graduate School of Education has introduced a high school curriculum focused on helping students recognize and question bias in AI systems. The program, “AI Auditing for High School,” teaches algorithmic bias through hands-on audits and real-world examples, even for students with no coding background. Developed by professors Yasmin Kafai and Danaë Metaxa alongside U.S. educators, it aims to build critical thinking around how AI works, who it benefits, and where it falls short. The launch comes as Penn GSE expands its AI training initiatives, supported by major grants and growing partnerships with school districts.



Read more here.


Google Rolls Out AI Updates for Seasonal Shopping

/Vidhya Srinivasan, VP/GM Ads and Commerce, on Google Blogs — The Keyword


Google is rolling out a major AI shopping update across Search and Gemini to simplify holiday shopping. Users can now describe what they want conversationally in AI Mode and get organized results with visuals, reviews, prices, and inventory data. Gemini also supports shopping directly in-app, helping people compare products, explore ideas, and find real-time listings powered by the Shopping Graph. New agentic features, like Google calling local stores on your behalf and automated budget-friendly purchasing, aim to save time while making shopping more efficient.



Read more here.

Source: Google

SOME AI TOOLS TO TRY OUT:


  • GC AI – AI for in-house teams that drafts, reviews, and analyzes legal documents fast.

  • Marble – Generates persistent, high-fidelity 3D worlds from images, video, or text.

  • Sendr – AI-powered sales outreach that personalizes messages and scales workflow.








How’s the week unfolding on your side? The AI world’s busy connecting the dots.


The AI industry’s talking less about chips and more about deployment. “Hybrid AI” is the new buzz, mixing private systems with public cloud to create scalable, secure, and surprisingly flexible solutions.


Meanwhile, Gamma just turned its slide-making magic into a $2.1 billion success story.


And Higgsfield is giving creative teams a new space to co-build, edit, and ship... all without leaving the app.


Makes you wonder what the next few days will bring.


Here's another crazy day in AI:

  • Why Hybrid AI is gaining traction with companies

  • Gamma hits $2.1B valuation as AI slides surge in popularity

  • Higgsfield rolls out shared AI hub for teams and studios

  • Some AI tools to try out

TODAY'S FEATURED ITEM: Hybrid AI Is Reshaping Enterprise Strategy


Image Credit: Wowza (created with Ideogram)


How can companies harness AI’s full power without giving up control of their own data?


While much of the conversation around artificial intelligence focuses on chip manufacturers and their latest hardware, something interesting is happening with how companies actually deploy AI. TECHnalysis Research president Bob O'Donnell recently spoke with Yahoo Finance's Market Catalysts about what he calls “hybrid AI”—essentially, how major enterprises are combining public cloud services with their own private infrastructure. With AMD's Analyst Day at Nasdaq and SoftBank's sale of its Nvidia stake setting the financial backdrop, O'Donnell discusses how companies like Coca-Cola and GM are building private AI “factories” and what that could mean for the next phase of innovation, infrastructure, and investment.


Some of the main insights from the discussion include:

  • SoftBank’s sale of its Nvidia shares reflects a portfolio reallocation, with capital moving toward ventures like ARM and OpenAI instead of signaling lost confidence.

  • Hybrid AI builds on the concept of hybrid cloud computing, allowing workloads to be divided between public platforms and private data centers for flexibility and security.

  • Large enterprises are developing in-house AI systems to handle sensitive data and proprietary processes more securely.

  • AMD is positioning itself as a strong competitor to Nvidia, expanding across CPUs, GPUs, and FPGAs to support growing AI workloads.

  • AI applications are moving closer to the edge, enabling automation through robotics, sensors, and industrial tools.

  • Experts anticipate that tangible returns and wider adoption of AI technologies may become more evident by 2026–2027.

  • Integrating AI into organizations remains a human challenge as much as a technical one, requiring changes in workflow and management.




O'Donnell talks about something that doesn't always get much attention in AI coverage: the practical choices companies face when they're deciding where their AI should actually run. When large companies like Coca-Cola or GM invest in their own infrastructure, they're thinking about things like data security, regulatory compliance, long-term costs, and keeping control over systems that might be central to their business. The hybrid approach gives them flexibility—they can keep sensitive information on their own servers while still using public cloud services for other tasks that don't require the same level of security.


O'Donnell acknowledges that companies are spending heavily on AI infrastructure right now, but many are still figuring out what works best for them. The returns will probably come in stages over several years as organizations learn through experience. His point about the human side being just as tough as the technical side seems particularly relevant—having sophisticated AI systems available doesn't automatically mean people will know how to use them effectively or that organizations will successfully integrate them into daily operations. For anyone following how AI is actually being implemented in businesses rather than just developed in labs, these kinds of details provide useful perspective. It's less about dramatic breakthroughs and more about the steady work of figuring out where systems should run, how to manage them, and how to make them genuinely useful for specific business problems.




Check it out here.

OTHER INTERESTING AI HIGHLIGHTS:


Gamma Hits $2.1B Valuation as AI Slides Surge in Popularity

/Julie Bort, Startups & Venture Desk Editor, on TechCrunch


AI presentation startup Gamma has reached a $2.1 billion valuation after raising $68 million in its Series B led by Andreessen Horowitz. Co-founder and CEO Grant Lee shared that the company has hit $100 million in annual recurring revenue with 70 million users. Known for its AI-generated presentations, websites, and social content, Gamma has achieved profitability with a small team of about 50 employees. The new funding round also includes a $20 million secondary offering for early employees, underscoring investor confidence in Gamma’s steady, profit-driven growth.



Read more here.


Higgsfield Rolls Out Shared AI Hub for Teams and Studios

/Rus Syzdykov, Head of Prompt Engineering, on HiggsfieldAI Blogs


HiggsfieldAI has launched its new Team and Enterprise Plans, offering shared workspaces for creative collaboration and production. The update allows teams to co-create, organize, and deliver projects within one unified platform, complete with real-time collaboration, analytics dashboards, and role-based access. Designed for agencies, brands, studios, and educators, the platform streamlines AI-powered workflows from concept to delivery. With shared assets, seamless switching between personal and team modes, and enterprise-grade customization, Higgsfield’s new plans aim to redefine how teams produce at scale.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Papiers – New interface for arXiv papers with mindmaps, related works, and discussions.

  • Jinna.ai – Searches across all your content in 100+ languages, fast and smart.

  • Gamma – Instantly turns your ideas into polished presentations or websites.






Copyright Wowza, Inc. 2025