
Hello, AI Enthusiasts.
Almost Friday—AI news to wrap up your Thursday night! 🚀
Microsoft’s latest Global Online Safety Survey found that 73% of people struggle to tell real images from AI-generated ones, a finding backed up by its “Real or Not” quiz. To tackle this, Microsoft is teaming up with Childnet and launching a Minecraft game, “CyberSafe AI: Dig Deeper,” to educate young minds on AI safety. 🛡️
Meanwhile, Apple is partnering with Alibaba to roll out AI features in China. And over at xAI, Elon Musk teases a new chatbot release within weeks, claiming it has “scary smart” reasoning abilities. 🤯
Here's another crazy day in AI:
Microsoft’s plan to tackle AI misinformation
Apple and Alibaba team up for AI in China
Elon Musk says Grok 3 AI is ‘scary smart’ and coming soon
Some AI tools to try out
TODAY'S FEATURED ITEM: Can You Spot AI Fakes? Most People Can’t.

Image Credit: Wowza (created with Ideogram)
Are you sure that image is real?
With AI-generated content becoming more sophisticated, spotting what’s real and what’s not is getting harder. Microsoft’s latest Global Online Safety Survey shows that 73% of people find it difficult to tell the difference, even after taking a quiz that tested their ability to recognize AI-generated images. As AI continues to evolve, so do the risks, from scams and misinformation to deepfakes and online abuse.
Microsoft’s Chief Digital Safety Officer, Courtney Gregoire, shares insights from the 2025 Safer Internet Day report, exploring how AI is reshaping online risks and the steps being taken to address them. The findings highlight a growing need for media literacy, digital safety education, and responsible AI use.
What the report reveals
AI usage is rising—51% of people report using AI tools this year, up from 39% in 2023.
Identifying AI-generated content is a challenge—participants correctly spotted only 38% of deepfake images in Microsoft’s “Real or Not” quiz.
The biggest concerns about AI include:
Online scams and fraud (73%)
Cyberbullying and harassment (73%)
The spread of deepfake content (72%)
Microsoft is launching new initiatives to help people navigate AI risks, including:
A collaboration with Childnet to educate young users on AI safety.
CyberSafe AI: Dig Deeper, a Minecraft game designed to teach AI literacy through interactive learning.
A digital safety guide for older adults, developed in partnership with AARP’s Older Adults Technology Services (OATS).
Microsoft continues to advocate for responsible AI policies that promote both innovation and safety.
The increasing difficulty in distinguishing between real and AI-generated content raises important questions about trust in digital spaces. When misinformation spreads more easily, it impacts not just individuals but entire communities. Deepfake technology, for example, is already being used in scams, identity theft, and even political deception. Without the right tools to verify content, people are left more vulnerable to manipulation.
Addressing these risks requires a combination of education, policy changes, and technological solutions. While companies like Microsoft are working on AI transparency tools and safety initiatives, individuals also need to develop stronger media literacy skills. Recognizing AI-generated content is becoming an essential part of digital awareness, and staying informed is the first step toward navigating this new landscape responsibly.
Read the full article here.
Read the full report here.
OTHER INTERESTING AI HIGHLIGHTS:
Apple and Alibaba Team Up for AI in China
/Dylan Butts on CNBC
Apple has chosen Alibaba as its AI partner for iPhones in China, Alibaba Chairman Joe Tsai confirmed at the World Governments Summit. The partnership will help Apple navigate China’s strict AI regulations while integrating AI-powered features into its devices. This move comes as Apple faces increasing competition from local brands like Huawei, which have already introduced AI-driven smartphones. Analysts suggest the collaboration could be key to Apple's AI expansion in China, where government policies require AI models to comply with strict content and approval guidelines.
Read more here.
Elon Musk Says Grok 3 AI is ‘Scary Smart’ and Coming Soon
/Tsarathustra (@tsarnick) on X (formerly Twitter)
Elon Musk has announced that Grok 3, the latest version of xAI’s chatbot, will be released within “a week or two” and claims it has “scary smart” reasoning capabilities. Speaking at the World Governments Summit in Dubai, Musk emphasized the need for AI models focused on truth-seeking to prevent dystopian outcomes. He also predicted that AI, combined with humanoid robots, could create an economy where goods and services become abundant, potentially making money obsolete. Alongside AI developments, Musk discussed global economic policies, the dangers of overregulation, and his vision for The Dubai Loop, a high-speed underground transport system.
Check it out here.
SOME AI TOOLS TO TRY OUT:
RabbitHole - Explore any topic with branched learning and follow-up questions.
Learn Copywriting - Get AI feedback on your copywriting skills.
Tempo - A visual React editor for developers and designers.
That’s a wrap on today’s Almost Daily craziness.
Catch us almost every day—almost! 😉
EXCITING NEWS:
The Another Crazy Day in AI newsletter is now on LinkedIn!!!

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we use AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, so you stay informed about the latest trends and developments.