Another Crazy Day in AI: You Can Now Make Your Digital Twin Move Like You Do
- Wowza Team

- Aug 29
- 4 min read

Hello, AI Enthusiasts.
Just dropping by with your quick AI pulse check for the week.
HeyGen has introduced Avatar IV, a major step forward for its Digital Twin technology. Unlike earlier avatars that mainly mirrored speech, this one is tuned to capture the subtleties of human interaction—expressions, gestures, even those tiny mannerisms that make a video feel personal.
Stanford economists, meanwhile, say younger workers are already feeling the squeeze from AI adoption in the labor market.
And for language learners, Google Translate just got a serious upgrade that turns your phone into both a real-time interpreter and a practice partner.
More soon, but for today, we’ll leave it here.
Here's another crazy day in AI:
- HeyGen launches most advanced avatar technology yet
- AI adoption linked to job losses for recent grads
- Google Translate adds live AI conversations and practice tools
- Some AI tools to try out
TODAY'S FEATURED ITEM: Avatar IV and Digital Presence

Image Credit: Wowza (created with Ideogram)
Can a digital version of you feel just as real as the person behind the camera?
HeyGen has introduced an upgrade to its Digital Twin, now powered by Avatar IV, which the company describes as its most advanced avatar model to date. The announcement came via posts on HeyGen's LinkedIn page and from CEO Joshua Xu, while practical implementation details were shared in a guide on the HeyGen Help Center by Avi Yaffe, their Director of Operations & Customer Support. The update aims to make video creation more authentic and less time-intensive. Instead of standard avatars that only mimic speech, this version is designed to capture the subtleties that make communication feel personal—gestures, expressions, and mannerisms.
How the technology operates:
- Studies individual speaking patterns and modifies delivery based on script type
- Records complete body language, including natural hand gestures and facial responses
- Compatible with common recording equipment, from phones to webcams
- Takes approximately 10-20 minutes to process standard-resolution videos
- Generates up to 100 different visual styles for each avatar created
- Needs 2-5 minutes of uninterrupted footage for the initial recording
- Implements consent verification protocols during the setup process
- Accessible via web platform or through API for technical integration
- Preserves personal communication habits across different video content
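For the API route mentioned above, the "record once, generate many" workflow might look roughly like this from the integration side. This is an illustrative sketch only: the endpoint path, header name, and payload fields are assumptions modeled on typical video-generation APIs, not confirmed HeyGen API details—the Help Center guide is the authoritative reference.

```python
# Hypothetical sketch of generating multiple avatar videos from one
# recorded digital twin. Endpoint, header, and field names are assumed.
import json
import urllib.request

API_BASE = "https://api.heygen.com"  # assumed base URL


def build_generation_request(avatar_id: str, script: str, api_key: str):
    """Assemble one (hypothetical) video-generation request:
    the same avatar delivers a new script each time."""
    payload = {
        "avatar_id": avatar_id,   # the digital twin created from your footage
        "script": script,         # new text for the avatar to deliver
        "resolution": "1080p",
    }
    return urllib.request.Request(
        f"{API_BASE}/v2/video/generate",  # assumed endpoint name
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


# Build (but don't send) one request per script -- hours of recording
# replaced by a loop over scripts.
scripts = ["Welcome to this week's update.", "Here's our product roadmap."]
requests_to_send = [
    build_generation_request("my-avatar", s, "YOUR_API_KEY") for s in scripts
]
```

The point of the sketch is the shape of the workflow, not the exact API surface: a single avatar ID is reused across arbitrarily many scripts, which is what makes the time savings described below possible.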
The immediate appeal is fairly obvious for people who regularly create video content. Think about professionals who record weekly updates, trainers developing course materials, or small business owners who want to maintain a personal touch in their communications without the constant time investment. The technology essentially allows someone to record once and then generate multiple videos with different scripts, potentially saving hours of recording time while maintaining their personal communication style.
What's more intriguing is how this might influence our expectations around digital communication. We're increasingly accustomed to various forms of digital representation, from filtered photos to voice assistants, but video has generally remained more tied to real-time human presence. This technology sits in an interesting middle ground—it's clearly artificial, yet designed to feel authentic. The success of such tools will likely depend on whether audiences accept and trust these digital representations, and whether the nuanced aspects of human communication that build rapport and credibility can truly be replicated. As these technologies become more common, we may need to develop new frameworks for understanding authenticity in professional and personal video communication.
Read the LinkedIn Post here.
Learn how to create one here.
OTHER INTERESTING AI HIGHLIGHTS:
AI Adoption Linked To Job Losses For Recent Grads
/Andrew R. Chow, Correspondent, on TIME
A new Stanford Digital Economy Lab study finds that AI is beginning to reshape the labor market, with younger workers hit the hardest. Employment for 22- to 25-year-olds in AI-exposed professions like software engineering, marketing, and customer service has declined by 16% since late 2022, even as overall employment grew. Researchers say this may reflect early-career employees being more replaceable, while older workers maintain an edge through tacit knowledge and organizational power. The report stresses that AI used for augmentation rather than automation could still drive productivity and prosperity.
Read more here.
Google Translate Adds Live AI Conversations And Practice Tools
/Matt Sheets, Product Manager, on The Keyword, Google's blog
Google is rolling out new AI-powered features in Translate designed to break down language barriers in real time and improve language learning. Using Gemini's advanced multimodal reasoning, Translate now enables smooth back-and-forth live conversations in more than 70 languages, even in noisy environments. A new "practice" mode creates tailored exercises to help learners build confidence in listening and speaking, adapting to each user's skill level and goals. These updates, which Google says drew positive feedback in testing, are now rolling out in the U.S., India, and Mexico.
Read more here.
SOME AI TOOLS TO TRY OUT:
That’s a wrap on today’s Almost Daily craziness.
Catch us almost every day—almost! 😉
EXCITING NEWS:
The Another Crazy Day in AI newsletter is on LinkedIn!!!

Leveraging AI for Enhanced Content: As part of our commitment to exploring new technologies, we used AI to help curate and refine our newsletters. This enriches our content and keeps us at the forefront of digital innovation, ensuring you stay informed with the latest trends and developments.