Another Crazy Day in AI: Cutting Through Silicon Valley Noise

Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Halfway through the week is a good time to separate the signal from the noise.


In the latest How to Fix the Internet podcast, Princeton’s Arvind Narayanan takes on “AI snake oil” with a mix of optimism and skepticism. He argues that AI can be transformative, but only if we build it into strong systems with guardrails, rather than chasing flashy headlines.


Meanwhile, Google is investing $9 billion in Oklahoma to expand AI infrastructure and fund free training programs for local students.


Indiana University is also jumping in, launching GenAI 101 this month to boost AI literacy with hands-on lessons and real-time help from an AI tutor.


Here's another crazy day in AI:

  • A clearer picture of AI beyond the hype and fear

  • Google backs Oklahoma AI education and infrastructure

  • Indiana U debuts AI literacy program for students and faculty

  • Some AI tools to try out


TODAY'S FEATURED ITEM: The Middle Path Through Tech Panic

A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


Is artificial intelligence destined to replace human workers, or are we fundamentally misunderstanding what AI can and cannot do?



In Separating AI Hope from AI Hype, an episode of the Electronic Frontier Foundation’s How to Fix the Internet podcast, Princeton computer science professor Arvind Narayanan joins hosts Cindy Cohn and Jason Kelley to take a clear-eyed look at what AI can realistically do and where the hype often gets ahead of the facts. Narayanan, co-author of the AI Snake Oil newsletter and book, describes himself as a “techno-optimist”—but one who believes AI will only be genuinely useful if it is guided by strong guardrails and integrated into well-functioning systems. The conversation spans education, hiring, criminal justice, misinformation, and the idea that AI’s most profound role may be in areas we barely notice.



Here’s what the discussion covers:

  • Human work often involves interpreting vague instructions, applying common sense, and navigating messy real-world situations that resist computational solutions

  • Predictive algorithms in criminal justice and hiring frequently just identify who has been arrested or hired before, rather than accurately forecasting future behavior

  • Crude "cheapfakes" often work better for political purposes than sophisticated deepfakes because they reinforce existing beliefs rather than trying to convince skeptics

  • Testing AI systems properly requires years-long controlled studies similar to medical trials, but organizations rarely invest in this kind of evaluation

  • Educational tools show real potential when AI helps teachers create customized learning activities for individual students struggling with specific concepts

  • Future work may center more on supervising AI systems than being replaced by them, similar to how industrial automation created supervisory roles

  • Adding AI to dysfunctional organizations or processes rarely fixes underlying problems and may amplify existing issues

  • Widespread adoption will likely unfold over decades as institutions figure out how to reorganize around new capabilities



Narayanan’s insights draw from both personal experience and years of research. He shares how early access to technology shaped his education, underscoring AI’s potential to expand opportunities when applied thoughtfully. In classrooms, this could mean teachers using AI to create tailored tools that address specific learning needs. But he also stresses that technology alone cannot resolve deep-rooted challenges like underfunded schools or outdated practices; unless those problems are addressed, AI’s impact will remain limited.


The conversation also looks beyond the common extremes of AI discourse. Rather than predicting a sudden takeover or revolutionary transformation, Narayanan envisions AI gradually embedding itself into everyday processes, much like other technologies we now take for granted. In this future, its influence would be significant but often invisible, helping to improve tasks without replacing the human judgment that guides them.


This perspective encourages a more measured approach—one that identifies where AI can genuinely make a difference, demands evidence for its effectiveness, and keeps people involved in overseeing its use. By moving away from broad, sweeping claims, the discussion offers a clearer understanding of AI’s practical place in society, reminding us that its value will depend less on what it promises and more on how we choose to apply it.



Read the full article here.

Watch it on YouTube here.

Listen on Apple Podcasts here.

Listen on Spotify here.

OTHER INTERESTING AI HIGHLIGHTS:


Google Backs Oklahoma AI Education and Infrastructure

/Company Announcements, on Google Blogs – The Keyword


Google is committing $9 billion over the next two years to expand cloud and AI infrastructure in Oklahoma, including a new data center campus in Stillwater and an expansion of its Pryor facility. The investment also funds workforce development programs, such as the Google AI for Education Accelerator, which offers no-cost AI training and Google Career Certificates to students at the University of Oklahoma and Oklahoma State University. Additional funding will boost the electrical workforce pipeline by 135% to meet new energy infrastructure demands. Google says these efforts will help prepare Oklahoma’s students and workers to lead in America’s AI future.



Read more here.


Indiana U Debuts AI Literacy Program for Students and Faculty

/Ashley Mowreader (Student Success Reporter), on Inside Higher Ed


Indiana University is launching GenAI 101, a free, self-paced online course designed to teach students, faculty, and staff the basics of generative AI. Covering topics from prompt engineering and ethical AI use to data storytelling and fact-checking, the eight-module course takes four to five hours to complete and awards a certificate upon completion. It features an AI teaching character and an AI tutor that answers questions in real time. Set to launch on August 25, GenAI 101 will auto-enroll students, and some faculty plan to integrate it directly into their curriculum to help close gaps in AI literacy.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Embeddable – Build interactive website tools with AI.

  • Cora Computer – Search your inbox to answer any email question.

  • Deskrib – Turn ideas into beautiful documents instantly.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025