Nvidia acquires AI chip rival Groq for $20B

PLUS: Anduril’s EagleEye helmet, Gemini controls Waymo, MiniMax releases M2.1

Good morning AI lover,

Nvidia is strengthening its grip on the hardware market with a massive $20 billion all-cash deal to buy rival Groq. The agreement secures Groq’s chip assets for the tech giant while explicitly leaving the startup’s cloud business behind.

By taking a major player off the board, the move grants Nvidia immediate access to Groq’s specialized inference architecture. With the market leader using its capital to consolidate power, is there any room left for competitors to climb the hardware ladder?

In today’s AI recap:

  • Nvidia acquires chip rival Groq for $20B

  • Anduril’s AI helmet offers 200-degree vision

  • Gemini controls Waymo cabin settings

  • MiniMax releases M2.1 coding model

Nvidia acquires Groq in massive $20B deal

The Recap: Nvidia cements its hardware dominance by acquiring AI chip startup Groq in a record-breaking $20 billion all-cash transaction. This deal secures Groq’s chip assets while excluding its cloud services division.

Unpacked:

  • The $20 billion all-cash price dwarfs Nvidia’s previous record for a company acquisition.

  • Terms of the agreement transfer ownership of all chip assets to Nvidia, though the deal explicitly excludes Groq’s cloud business.

  • Bringing the rival in-house strengthens Nvidia’s position in AI hardware and raises the barrier for any remaining challengers.

Bottom line: This acquisition removes a significant competitor from the board and grants Nvidia immediate access to specialized chip technology. Competitors face an increasingly steep climb as the market leader utilizes massive capital to consolidate the hardware landscape.

Anduril’s new AI helmet offers 'superhuman' vision

The Recap: Anduril has unveiled EagleEye, an AI-driven helmet system that fuses data from drones and sensors to give soldiers a 200-degree 3D view of the battlefield.

Unpacked:

  • The system replaces bulky night vision goggles with lightweight swappable glasses that sit inside the helmet to support day, night, and augmented reality modes.

  • AI algorithms fuse real-time data from drones, teammates, and onboard cameras to build a broad 3D view that lets operators see around corners and identify threats behind cover.

  • Integrated ear flaps amplify distant conversations and pinpoint gunshots on the heads-up display, while sensors detect otherwise invisible radio signals and laser locks.

Bottom line: This technology shifts situational awareness from simple optical enhancement to comprehensive sensory fusion where software defines visibility. Integrating complex data streams directly into wearable gear significantly increases safety and effectiveness for operators in chaotic environments.

Google's Gemini now controls Waymo cabin settings

The Recap: Waymo is testing a new integration of Google’s Gemini directly into its robotaxis, enabling passengers to control cabin settings and receive answers to questions through a conversational voice interface.

Unpacked:

  • Analysis of the Waymo Ride Assistant Meta-Prompt reveals the system uses a Gemini 2.5 Flash model to execute voice commands for adjusting temperature, fan speed, and music playback.

  • Rigid safety protocols prevent the assistant from accessing driving functions like acceleration or route changes, ensuring the Waymo Driver retains sole authority over vehicle motion (a minimal sketch of this separation follows this list).

  • The system prioritizes privacy by keeping microphones inactive until a passenger explicitly engages the assistant, creating a clear operational boundary between the AI companion and the driving system.
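
That split between cabin comfort and vehicle motion maps naturally onto a standard tool-calling pattern: the model can only invoke functions that are explicitly registered with it. Here is a minimal Python sketch of that idea; the tool names, schemas, and dispatcher below are hypothetical illustrations, not Waymo’s actual interface.

```python
# Hypothetical sketch: only cabin controls are registered as callable tools,
# so the model has no pathway to driving functions. All names are illustrative.

# Tool schemas the assistant is allowed to call (cabin comfort only).
CABIN_TOOLS = {
    "set_cabin_temperature": {
        "description": "Set cabin temperature in degrees Fahrenheit.",
        "parameters": {"degrees_f": "number, 60-85"},
    },
    "set_fan_speed": {
        "description": "Set the fan speed.",
        "parameters": {"level": "integer, 0-5"},
    },
    "play_music": {
        "description": "Play a song, artist, or station.",
        "parameters": {"query": "string"},
    },
}

def dispatch(tool_name: str, args: dict) -> str:
    """Route a model-issued tool call to the cabin controller.

    Driving functions (speed, braking, routing) are simply never
    registered, so a request for them is refused at this layer.
    """
    if tool_name not in CABIN_TOOLS:
        return f"Refused: '{tool_name}' is not an available cabin control."
    # A real system would call into the HVAC or media subsystems here.
    return f"OK: executed {tool_name} with {args}"

# The model asks for a route change; the dispatcher refuses it.
print(dispatch("set_route", {"destination": "airport"}))     # Refused
print(dispatch("set_cabin_temperature", {"degrees_f": 70}))  # OK
```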

Bottom line: This deployment shifts the autonomous ride experience from a passive commute to a personalized, voice-controlled environment. Seamlessly integrating natural language assistants into physical spaces removes the friction of navigating touchscreens for passengers in motion.

MiniMax releases M2.1 coding model

The Recap: MiniMax released M2.1, a new AI model optimized for complex multi-language coding tasks and autonomous tool execution, which the company claims outperforms top competitors.

Unpacked:

  • The model delivers enhanced capabilities in languages like Rust, Java, and C++ while addressing industry-wide mobile development weaknesses with native Android and iOS support.

  • Performance tests show the system achieving an 88.6 average score on the VIBE benchmark, which uses an Agent-as-a-Verifier paradigm to assess application logic and aesthetics in real time (sketched after this list).

  • New "digital employee" features enable autonomous tool execution within frameworks like Cline and Factory AI to complete end-to-end office administration and data analysis tasks.
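
The Agent-as-a-Verifier idea mentioned above replaces static unit tests with a second agent that exercises the generated application and scores it. Below is a minimal, hypothetical Python sketch of such a generate-verify loop; the function names, thresholds, and scores are illustrative assumptions, not MiniMax’s actual pipeline.

```python
# Hypothetical generate-verify loop: a verifier agent drives the built app
# and scores both behavior and looks, feeding failures back to the generator.
from dataclasses import dataclass

@dataclass
class Verdict:
    logic_score: float      # does the app behave correctly when exercised?
    aesthetic_score: float  # does the rendered UI look acceptable?

    @property
    def passed(self) -> bool:
        # Illustrative acceptance thresholds.
        return self.logic_score >= 0.8 and self.aesthetic_score >= 0.8

def generate_app(spec: str, feedback: str = "") -> str:
    """Stand-in for the coding model producing application code."""
    return f"<app implementing: {spec}; revised per: {feedback or 'n/a'}>"

def verify(app: str) -> Verdict:
    """Stand-in for a verifier agent that runs and inspects the app.

    A real verifier would launch the app, click through its flows,
    and screenshot the UI; here we just return fixed scores.
    """
    return Verdict(logic_score=0.9, aesthetic_score=0.85)

def build_until_verified(spec: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        app = generate_app(spec, feedback)
        verdict = verify(app)
        if verdict.passed:
            return app
        feedback = f"logic={verdict.logic_score}, aesthetics={verdict.aesthetic_score}"
    raise RuntimeError("app failed verification after max rounds")

print(build_until_verified("todo list with drag-and-drop"))
```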

Bottom line: This release signals a shift toward AI that builds complete, functional applications rather than just generating isolated code snippets. Developers gain a powerful alternative for complex agentic workflows that require deep reasoning across multiple programming languages.

The Shortlist

BU-30B-A3B debuted as Browser Use's first open-source LLM, a 30-billion-parameter model (3 billion active) designed to execute 200 tasks per dollar.

FlashPortrait introduced a new video diffusion transformer capable of generating infinite-length, ID-preserving portrait animations with inference speeds up to six times faster than previous methods.

ComposioHQ shared a repository of "Awesome Claude Skills," providing structured prompts and logic for integrating the model with tools like AWS, Playwright, and Microsoft Word.

Google detailed its major 2025 AI milestones in a new recap, highlighting the launch of the Pixel 10, Veo video generation, and the Nano Banana image editing tool.