Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Welcome to another week! If you thought AI would take a break over the weekend, think again—it’s been busy...


Ethan Mollick shares a study finding that generative AI often surpasses human creativity in marketing tasks. The forthcoming paper compares AI-generated visuals with human designs. Who comes out on top?


In healthcare news, a study of AI-assisted virtual urgent care visits found that AI-generated diagnoses were rated equal to or better than physicians' in most cases. Who knew AI could play doctor too?


And Google’s AI Mode now supports image queries—time to snap and ask your way to knowledge!


Get ready for a rollercoaster week in AI.


Here's another crazy day in AI:

  • Can a prompt outperform a pro?

  • Can AI make better first calls in urgent care?

  • Google AI Mode now answers image-based questions

  • Some AI tools to try out


TODAY'S FEATURED ITEM: When AI Creates Better Ads


A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


Is your next best-performing ad just a prompt away?


That’s the idea behind The Power of Generative Marketing, a forthcoming academic study by Jochen Hartmann, Yannick Exner, and Samuel Domdey, soon to appear in the International Journal of Research in Marketing. The paper explores how generative AI compares with professional human creatives when it comes to producing marketing visuals—and whether it can, in some cases, do even better.


The research team analyzed thousands of images generated using tools like DALL·E 3, Midjourney v6, and Adobe Firefly 2. These AI visuals were tested against professional stock photography and content designed by freelancers. Using controlled experiments, perception studies, and a real-world ad campaign, the researchers measured both the creative quality and the business results.


Wharton professor Ethan Mollick shared the study in a recent post, calling attention to how well generative AI performed. He noted that these tools aren’t just closing the gap with human creators—they’re starting to outperform them in specific tasks. “Generative tools are already producing superhuman performance for many marketing tasks,” he wrote. What’s striking, Mollick added, is that these aren’t projections. This is already happening.


Hartmann, Jochen, Exner, Yannick, and Domdey, Samuel, The power of generative marketing: Can generative AI create superhuman visual marketing content? (September 5, 2024). International Journal of Research in Marketing, forthcoming. Available at SSRN: https://ssrn.com/abstract=4597899 or http://dx.doi.org/10.2139/ssrn.4597899

What the study revealed about AI-generated marketing content:

  • AI visuals scored higher in visual quality, appeal, and how well they fit the creative brief.

  • In a live campaign involving over 173,000 participants, an image made with DALL·E 3 received up to 50% more clicks than the professionally designed version (see the significance-test sketch after this list).

  • The cost of generating the top-performing image was just $0.04.

  • Even without knowing which was AI-generated, human judges consistently preferred the synthetic visuals.

  • The team also released GenImageNet, a new dataset to support further research on generative marketing.
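
For those who like to poke at numbers like these, here is a minimal sketch of how a click-through lift of that size could be checked for statistical significance. To be clear, the impression split, baseline click-through rate, and click counts below are illustrative assumptions, not figures from the paper—the study reports its own campaign results.

```python
from math import sqrt

# Illustrative numbers only (NOT from the paper): split ~173,000 ad
# impressions evenly between the professionally designed image and the
# DALL-E 3 image, and assume a 2% baseline click-through rate (CTR)
# with a 50% relative lift for the AI version.
n_human, n_ai = 86_500, 86_500
clicks_human = round(n_human * 0.020)   # ~1,730 clicks
clicks_ai = round(n_ai * 0.030)         # ~2,595 clicks, i.e. 50% more

ctr_human = clicks_human / n_human
ctr_ai = clicks_ai / n_ai

# Two-proportion z-test for the difference in click-through rates
p_pool = (clicks_human + clicks_ai) / (n_human + n_ai)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_human + 1 / n_ai))
z = (ctr_ai - ctr_human) / se

print(f"CTR (human): {ctr_human:.2%}   CTR (AI): {ctr_ai:.2%}   lift: {ctr_ai / ctr_human - 1:.0%}")
print(f"z = {z:.1f}  (|z| > 1.96 means the gap is significant at the 5% level)")
```

At a sample size like this, even a modest gap in click-through rate clears conventional significance thresholds, which is part of why a large live campaign is such a convincing complement to lab-style perception studies.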



The findings paint a clearer picture of where we are with AI in creative work. In marketing, visuals need to do more than look good—they have to communicate quickly, align with brand identity, and drive results. This study suggests that generative tools are starting to meet, and sometimes exceed, those expectations in controlled settings and real-world campaigns.


Of course, these results don’t mean AI is “replacing” human creativity. What they do suggest is that the creative process itself is evolving. Tools like DALL·E and Midjourney are giving marketers new ways to test, iterate, and experiment—often in a matter of minutes and at a fraction of the cost. For teams stretched thin or looking to scale content quickly, this opens up new possibilities. Still, strategy, storytelling, and brand context remain essential, and human oversight is what gives these tools their impact.


This research is one of the clearest signals yet that generative AI is becoming part of how creative work gets done—not as a gimmick, but as a tool that performs under pressure. As more studies like this emerge, the conversation shifts from whether we use AI in creative tasks to how we integrate it meaningfully.




Read Mollick's LinkedIn post here.

Read and download the full paper here.

OTHER INTERESTING AI HIGHLIGHTS:


Can AI Make Better First Calls in Urgent Care?

/Dan Zeltzer, Zehavi Kugler, Lior Hayat, Tamar Brufman, et al., published in Annals of Internal Medicine


In a study of AI-assisted virtual urgent care visits, researchers compared AI-generated diagnoses and treatment recommendations with those of physicians. Surprisingly, the AI's recommendations were rated equal to or better in quality in most cases—especially when it came to identifying critical red flags and adhering to clinical guidelines. While doctors excelled at adapting to evolving information during a consultation, AI held its own in initial decision-making. The results suggest AI could become a powerful decision-support partner in virtual care settings.




Read more here.


Google AI Mode Now Answers Image-Based Questions

/Barry Schwartz on Search Engine Land


Google is rolling out an upgraded AI Mode that now supports multimodal inputs—you can ask questions using images from your camera or uploads. Drawing on its experience with visual search and Google Lens, AI Mode interprets scenes, identifies objects, and understands context to generate richly informed responses. This update expands access to millions more users and hints at a future where search is no longer just typed text, but interactive and visual. Try snapping a photo and asking away.
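
AI Mode itself is a consumer Search feature rather than an API, but if you would like to experiment with the same image-plus-question pattern programmatically, Google's Gemini models accept multimodal prompts. Here is a minimal sketch, assuming you have a Gemini API key and the google-generativeai Python package installed; the model name, photo filename, and question are placeholders, not anything specific to AI Mode.

```python
import google.generativeai as genai
from PIL import Image

# Assumes a valid API key; the model name and file below are illustrative.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

photo = Image.open("bookshelf.jpg")  # e.g., a quick snap from your phone
response = model.generate_content(
    [photo, "Which of these books are about machine learning, and which should I read first?"]
)
print(response.text)
```

In the Google app, the equivalent interaction needs no code at all: snap or upload a photo, ask your question, and AI Mode handles the rest.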



Read more here.

Source: Google

SOME AI TOOLS TO TRY OUT:


  • Clockwise – Smart scheduling and calendar automation with AI.

  • Experiments – Turn personal challenges into loggable, trackable experiments.

  • Enconvo – Build custom workflows that connect all your favorite tools.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Before you switch to weekend mode, here’s a quick AI rundown:


DeepMind is getting serious about AGI safety. In a new paper, their top researchers explore the future of human-level AI—and how we might avoid its biggest pitfalls. Think: solving diseases vs. losing control over machines. Heavy stuff, but an important read.


Meanwhile, AI agents could soon be weaponized—cybersecurity experts are already tracking their early moves.


In a creative twist, Stanford brought filmmakers and AI researchers together to explore how narratives influence our understanding of artificial intelligence.


Now, kick back and relax—you’ve earned it!


Here's another crazy day in AI:

  • The blueprint for responsible AGI

  • Is the future of hacking agentic too?

  • Stanford workshop on AI and storytelling

  • Some AI tools to try out


TODAY'S FEATURED ITEM: Google DeepMind’s AGI Plan


A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


What happens when general-purpose AI reaches a level of capability that rivals—or even surpasses—our own?


In a new paper, An Approach to Technical AGI Safety and Security, a team from Google DeepMind—including Anca Dragan, Rohin Shah, Four Flynn, and Shane Legg—shares its latest thinking on how to responsibly navigate the development of artificial general intelligence (AGI), a kind of AI that could match or exceed human cognitive abilities across most tasks. The paper outlines the risks AGI presents and the safeguards DeepMind is building to manage them.


The authors explore the balance between optimism for AGI’s benefits and deep caution about its potential harms. From improved diagnostics and personalized learning to cybersecurity and misalignment with human values, they break down where things could go wrong—and what they’re doing to make sure they don’t.


Source: Google DeepMind

A few things they’re working on:

  • Mapping out four central risk areas: misuse, misalignment, accidents, and broader societal impacts

  • Building technical safeguards like access restrictions and scenario simulations

  • Refining training methods that include human feedback and uncertainty-aware behavior

  • Investing in interpretability tools (such as MONA) to understand how AI systems reach their decisions

  • Conducting regular safety evaluations and inviting independent input

  • Stress-testing systems early and often to adapt as the technology evolves


Source: Google DeepMind

The paper doesn’t offer a silver bullet—and it doesn’t try to. Instead, it lays out a living framework that’s meant to grow and shift alongside the development of AGI itself. The goal is not just to anticipate what might go wrong, but to put systems in place that are capable of responding when the unexpected does happen.


For those tracking how AGI is being shaped behind the scenes, this paper provides a window into how one of the leading research labs is thinking about responsibility at scale. It doesn’t shy away from the complexity of the challenge, nor does it overstate what’s been solved. It simply asks: how do we build with care, when the stakes are this high?




Read the full article here.

Read the full paper here.

OTHER INTERESTING AI HIGHLIGHTS:


Is the Future of Hacking Agentic Too?

/Rhiannon Williams on MIT Technology Review


AI agents are becoming smarter and more autonomous—and cybersecurity experts warn that they might soon be used to conduct cyberattacks at scale. Unlike basic bots, agents can adapt, plan, and execute attacks with alarming efficiency, posing a serious threat to digital infrastructure. A project called LLM Agent Honeypot is already working to track these AI-driven intrusions in real time. Researchers say it's only a matter of time before cybercriminals start relying on agents for hacking—and we need to be ready before it happens.



Read more here.


Stanford Workshop on AI and Storytelling

/Dylan Walsh on Stanford HAI News


What happens when filmmakers and AI researchers work together to tell stories about artificial intelligence? A workshop at Stanford’s HAI brought both groups together to explore how narratives shape public understanding—and policy—around AI. Participants like filmmaker Sophie Barthes and researcher John Thickstun shared how blending academic ideas with storytelling revealed just how hard (and important) it is to make complex tech human, accessible, and emotionally compelling. The initiative is a unique reminder that how we talk about AI may shape how we use it.



Read more here.

John Thickstun, an assistant professor at Cornell, and filmmaker Sophie Barthes collaborate on a screenplay. | Source: Stanford HAI News

SOME AI TOOLS TO TRY OUT:


  • Beautiful AI – Instantly design stunning presentations with AI.

  • Cove Apps – A visual workspace where you can build custom AI-powered interfaces.

  • ElevenLabs – Just launched a text-to-bark model for dogs.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Another Crazy Day in AI: An Almost Daily Newsletter

Hello, AI Enthusiasts.


Get ready to wrap up the week with some intriguing AI updates!


A recent report reveals a striking divide in AI perception: experts love AI. The public? Not so much. But both agree: We need more control and better oversight.


Google’s Kent Walker explores how AI can innovate responsibly without trampling creators' rights. The question remains: Can AI play fair with artists?


And speaking of creativity, Runway just dropped Gen-4, giving creators new tools to play with. Now they can keep characters consistent and control scenes without losing their minds.


Time to power down. AI will keep the lights on.


Here's another crazy day in AI:

  • Public vs. Experts: Who’s right about AI?

  • Google & Alphabet’s Global Affairs Chief on creative content and AI training

  • Unlocking new storytelling possibilities with Runway Gen-4

  • Some AI tools to try out


TODAY'S FEATURED ITEM: AI Through Two Lenses


A robotic scientist in a classic white coat with 'AI Scientist' on its back stands beside a human scientist with 'Human Scientist' on their coat, looking towards the AI Scientist.

Image Credit: Wowza (created with Ideogram)


Who’s more excited about artificial intelligence—the experts building it or the people living with it?


A new Pew Research Center report uncovers a deep divide in how AI is perceived. AI experts see it as a force for progress, while the public remains wary. Yet, despite their differing levels of enthusiasm, both groups agree on two major concerns: the need for more personal control over AI and the fear that government oversight won’t be strong enough. The report, authored by Colleen McClain, Brian Kennedy, Jeffrey Gottfried, Monica Anderson, and Giancarlo Pasquini, highlights these contrasts and unexpected points of alignment.


Source: Pew Research Center

Where do they stand?

  • Experts are feeling hopeful. Over half of the AI experts surveyed (56%) expect AI to benefit the U.S. over the next two decades. Only 17% of the public shares that optimism.

  • When it comes to personal impact, views diverge even more. A full 76% of experts say AI will improve their own lives, while only 24% of Americans agree—and 41% believe it’ll do more harm than good.

  • Perspectives on work differ significantly. Nearly three-quarters of experts believe AI will enhance jobs. Just 23% of the public sees that outcome as likely.

  • Trust is low across the board in some areas. Whether it’s AI’s influence on journalism or elections, only about 1 in 10 people—regardless of expertise—see it as a positive.

  • Most want a say in how AI evolves. More than half of both groups believe people should have more input on how AI is used in society.

  • Concerns about regulation run deep. A shared worry: that government oversight will fall short in keeping AI risks in check.

  • There’s a noticeable gender gap. Among experts, 63% of men believe AI will have a positive impact on the country—compared to just 36% of women. A similar pattern appears in public opinion.


Source: Pew Research Center

The report doesn’t suggest one group is “right” and the other “wrong,” but it does offer a revealing snapshot of how differently people view the same technology depending on their proximity to it. For those building AI, the focus is often on potential—how these systems can improve lives, streamline work, and push boundaries. But for those encountering AI from the outside, the uncertainties are harder to ignore. Questions about job security, misinformation, and surveillance loom large.


Still, it’s not all division. The shared call for stronger oversight and greater public involvement points to a common concern for responsible development. Whether optimism or skepticism wins out, both perspectives will shape how AI becomes part of everyday life. Navigating this future will take listening—especially to those who feel left out of the conversation.




Read the full report here.

OTHER INTERESTING AI HIGHLIGHTS:


Google & Alphabet’s Global Affairs Chief on Creative Content and AI Training

/Kent Walker, President of Global Affairs, Google & Alphabet, on The Keyword by Google


As AI continues to reshape industries, the question of how creative content is used in training AI models becomes increasingly critical. Kent Walker explores the delicate balance between innovation and protecting creators' rights, discussing industry standards, responsible content acquisition, and emerging collaborations between AI developers and content publishers. From provenance tools like SynthID to ethical AI training practices, this article outlines the key principles guiding the future of AI-powered creativity.



Read more here.


Unlocking New Storytelling Possibilities with Runway Gen-4

/Runway


Runway's Gen-4 introduces a groundbreaking leap in AI-driven media generation, offering precise control over characters, locations, and objects across scenes. This next-gen model enables filmmakers, designers, and content creators to maintain consistency in style and cinematography without additional fine-tuning. With capabilities like infinite character consistency, physics-based realism, and production-ready video quality, Gen-4 is redefining what’s possible in AI-powered storytelling and visual effects.



Read more here.

SOME AI TOOLS TO TRY OUT:


  • Findr – Organize and search info across apps, links, notes, and files.

  • Recall – Instantly connect what you're reading with what you’ve saved before.

  • Proxy – A smarter web-browsing agent built to handle tasks better than OpenAI’s Operator.


That’s a wrap on today’s Almost Daily craziness.


Catch us almost every day—almost! 😉

EXCITING NEWS:

The Another Crazy Day in AI newsletter is on LinkedIn!!!



Wowza, Inc.

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.





Copyright Wowza, Inc. 2025