AI love AI
Midjourney officially enters the competitive AI video race
PLUS: Meta poaches OpenAI researchers and Napster’s AI pivot

GM AI lovers,
The competitive AI video space just got a major new player, as image-generation giant Midjourney has officially launched its first video model. The new tool allows users to animate their images into short clips with simple prompts.
Midjourney claims the new model is over 25 times cheaper than competing tools, making video creation more accessible than ever. But is this just Midjourney's entry into the video race, or is it the first step in its larger ambition to build real-time, interactive worlds?
In today’s AI recap:
Midjourney’s first AI video model is here
HeyGen's new agent creates entire videos
Meta poaches four key OpenAI researchers
Napster’s AI pivot with holographic companions
Midjourney's Next Act: Video
The Recap: AI image titan Midjourney has officially entered the competitive video generation space, launching its first video model that lets users turn static images into short, animated clips.

Unpacked:
The new "Image-to-Video" feature lets you animate any image—whether generated in Midjourney or uploaded—by simply describing the desired motion.
Creators can fine-tune animations with "high" and "low" motion settings to control the intensity of movement, from subtle ambient shifts to dynamic camera action.
Midjourney claims the new model is over 25 times cheaper than competing video tools, making animated content creation significantly more accessible.
Bottom line: This launch dramatically lowers the barrier for creators to produce simple video content directly from their existing image workflows. It also marks a clear step in Midjourney's larger ambition to build real-time, interactive world simulations.
HeyGen's New Video Agent
The Recap: HeyGen has unveiled its new Video Agent, a Creative Operating System that autonomously creates entire video projects from a single prompt. The agent-based platform handles everything from scriptwriting and AI actors to the final edits.
Unpacked:
The system operates like a full production team, autonomously handling the entire workflow including scriptwriting, visual selection, voiceovers, and editing.
HeyGen's CEO announced the launch, showcasing how the platform turns a simple idea from text or an image into a cohesive, publish-ready video.
This technology introduces what HeyGen calls agentic content creation, shifting AI's role from assistant to full producer.
Bottom line: This launch marks a significant step beyond AI copilots toward fully autonomous creative systems. Tools like this are poised to dramatically lower the barriers to high-quality video production for businesses and creators.
Meta's OpenAI Poaching Spree
The Recap: The AI talent war is intensifying as Meta reportedly poached four key researchers from OpenAI. These hires will join Meta's new unit focused on building superintelligence.

Unpacked:
The hires include Trapit Bansal, a key contributor to OpenAI's o1 reasoning model, and the trio of researchers who established the company's Zurich office.
This move comes just a week after OpenAI CEO Sam Altman publicly stated that Meta’s poaching attempts had failed to attract any of his top talent.
Researcher Lucas Beyer confirmed the move on X, but disputed claims of massive pay packages, calling the rumored $100M bonuses "fake news."
Bottom line: This hiring spree significantly accelerates Meta's push into superintelligence by acquiring proven talent from a top competitor. The moves underscore that the race for advanced AI is not just about compute or data, but also a fierce battle for the world's most brilliant minds.
Napster's AI Reinvention
The Recap: The iconic Napster brand is making a surprise comeback, pivoting to AI with a new platform of specialized AI avatars and a 3D holographic device for immersive conversations.
Unpacked:
The new Napster Companion platform lets you interact with thousands of specialized AI avatars, each possessing unique skills like career strategy or cooking.
A new 3D holographic device called Napster View aims to make interactions with AI companions more immersive and life-like.
These AI companions can use tools, see what’s on your screen, and remember past conversations to provide continuous, personalized support.
Bottom line: Napster's pivot suggests the next wave of AI interfaces will move beyond simple text boxes toward more embodied and visual interactions. This shift aims to make AI a more tangible and specialized collaborator in our daily professional and personal tasks.
The Shortlist
Google launched its Gemma 3n family of open models, designed to bring powerful multimodal capabilities like real-time video and audio analysis to mobile and edge devices.
WhatsApp began rolling out a new AI-powered feature that generates summaries of unread private chat messages to help users quickly catch up on conversations.
Black Forest Labs released FLUX.1 Kontext [dev], an open-weight, state-of-the-art image editing and composition model designed to run efficiently on consumer hardware.
Suno announced the acquisition of WavTool, integrating the startup's browser-based digital audio workstation to give musicians more advanced music creation and editing capabilities.