DeepMind’s AI cracks the genome’s code

PLUS: Claude now builds apps, Google’s new free coding agent, and more consistent AI video

GM AI lovers,

Google's DeepMind has introduced a powerful new AI capable of deciphering the complex, non-coding regions of the human genome. This breakthrough promises to accelerate our understanding of the genetic origins of many diseases.

By offering a new way to interpret the instructions that control our genes, the tool could be a major step toward developing new therapies. How quickly can the scientific community translate these AI-driven insights into real-world medical treatments?

In today’s AI recap:

  • DeepMind’s AI cracks the genome’s code

  • Claude now builds apps from simple prompts

  • Google’s new free coding agent

  • Runway boosts AI video character consistency

AI Decodes the Genome

The Recap: Google's DeepMind has launched AlphaGenome, a powerful new AI model designed to decode the vast non-coding regions of our DNA. This breakthrough promises to accelerate research into the genetic roots of diseases and guide the development of new therapies.

Unpacked:

  • The model analyzes sequences of up to 1 million DNA letters at a time and makes predictions at single-letter resolution, capturing fine-grained biological detail.

  • It offers a new way to study RNA splicing, a critical process where errors can lead to genetic diseases like spinal muscular atrophy and some forms of cystic fibrosis.

  • AlphaGenome is now available via API for non-commercial research, letting scientists test hypotheses more rapidly (see the sketch after this list).
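
Since the model is exposed through an API for non-commercial research, here is a rough sketch of what a query might look like from Python. The module and call names below (dna_client.create, predict_interval, OutputType.RNA_SEQ) are assumptions about DeepMind's published client rather than confirmed usage, so treat this as illustrative and check the official documentation for the real interface.

```python
# Illustrative sketch only: module and function names are assumptions about
# the AlphaGenome client and may not match the real package exactly.
from alphagenome.data import genome        # assumed module layout
from alphagenome.models import dna_client  # assumed module layout

API_KEY = "YOUR_NON_COMMERCIAL_API_KEY"    # placeholder credential

# Connect to the hosted model (non-commercial research access).
model = dna_client.create(API_KEY)

# A 1 Mb window: AlphaGenome scores sequences of up to 1 million DNA letters.
interval = genome.Interval(chromosome="chr22", start=35_000_000, end=36_000_000)

# Request RNA-seq-style outputs, for example to examine predicted splicing.
outputs = model.predict_interval(
    interval=interval,
    requested_outputs=[dna_client.OutputType.RNA_SEQ],
)
print(outputs.rna_seq)
```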

Bottom line: This tool moves beyond cataloging genes to understanding the complex instructions that control them. It empowers scientists to better pinpoint how genetic variations can lead to disease, speeding up the path to potential new treatments.

Claude Can Now Build Apps

The Recap: Anthropic just supercharged its Claude chatbot, letting you build and share interactive mini-apps using simple text prompts. The upgraded 'Artifacts' feature transforms conversation into creation, no coding required.

Unpacked:

  • You can now build dynamic tools, like a flashcard app that generates new cards on any topic you choose, instead of just static, single-use content.

  • The feature taps directly into the Claude API, allowing anyone to embed AI capabilities into their creations, from educational games to custom productivity tools (see the sketch after this list).

  • Anthropic is also fostering a community by letting users explore a gallery of pre-built artifacts for inspiration or to customize for their own needs.
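
An artifact runs in the browser, but the AI-powered behaviour described above ultimately comes down to a completion request against the Claude API. The sketch below shows that kind of request with Anthropic's Python SDK; the flashcard prompt and the model name are placeholders, and this is not the artifact runtime itself.

```python
import anthropic

# Reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# The same kind of call a flashcard-style artifact would rely on:
# ask Claude to produce structured content for the tool to render.
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use any current Claude model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Generate five flashcards about RNA splicing as a JSON list "
                   "of objects with 'front' and 'back' fields.",
    }],
)

print(message.content[0].text)
```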

Bottom line: This update pushes Claude beyond a simple conversational AI and into the realm of an accessible development environment. It significantly lowers the barrier for professionals and hobbyists to create and share their own useful, AI-powered tools.

Google's New AI Coder

The Recap: Google has released Gemini CLI, a free and open-source AI agent that lets developers use natural language to code, edit files, and troubleshoot bugs directly in their command-line terminal.

Unpacked:

  • The tool is free for personal use, offering the industry’s largest allowance with 60 model requests per minute and 1,000 requests per day at no charge.

  • As a fully open source project under an Apache 2.0 license, developers can inspect the code and contribute directly to its development.

  • Gemini CLI extends beyond coding by grounding prompts with Google Search for real-time information and supporting the Model Context Protocol (MCP) for greater extensibility (see the sketch after this list).
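
Because the last bullet mentions Model Context Protocol support, here is a minimal sketch of an MCP server that an MCP-capable client could launch, written with the official Python SDK's FastMCP helper. The server name and the word_count tool are made up for illustration, and registering the server with Gemini CLI is left to that tool's documentation.

```python
# Minimal MCP server sketch; the server name and tool are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves over stdio, the transport MCP clients typically use to
    # launch and talk to local servers.
    mcp.run()
```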

Bottom line: By offering powerful AI assistance directly in the terminal for free, Google is making high-end development tools more accessible to individual coders and small teams. This move directly competes with established players and lowers the barrier to entry for building with AI.

AI Video Gets More Consistent

The Recap: AI video platform Runway released a major update to its Gen-4 'References' feature. The enhancement significantly boosts consistency for characters and objects across generated clips.

Unpacked:

  • This update directly tackles one of AI video generation’s biggest hurdles: maintaining character consistency from one frame to the next.

  • Creators can use the improved 'References' feature to lock in a specific look for a person or object, ensuring it doesn't morph or change unexpectedly throughout a scene.

  • Better prompt adherence and consistency unlock the ability to create more coherent, narrative-driven content, moving beyond short, abstract video clips.

Bottom line: This step makes AI video a more reliable tool for storytellers and professionals. As consistency improves, AI-generated video moves closer to becoming a viable medium for producing complex and compelling visual narratives.

The Shortlist

OpenAI executives Sam Altman and Brad Lightcap appeared on the NYT’s Hard Fork podcast, where they faced pointed questions about the company's recent controversies and its future direction.

Tobi Lutke argued that "prompt engineering" is a misnomer and should be called "context engineering," sparking a wider discussion about the nature of interacting with LLMs.

2wai launched an early-access app that lets entertainers create and control their own AI-powered digital likenesses, ensuring they have full ownership of their virtual personas.

Flowstate plans to send an AI computer into orbit to design its next shoe, aiming to create a more sustainable manufacturing process from space.