Episode Notes

MIT may have just cracked one of AI’s biggest limits: long‑context blindness. In this episode, we unpack how Recursive Language Models (RLMs) let an AI think like a developer, peek at its data, and even call itself to handle 10‑million‑token inputs without forgetting a thing.

We’ll talk about:

  • How MIT’s RLM lets GPT‑5‑mini beat GPT‑5 by 114%
  • Why “context rot” might finally be solved
  • The new NotebookLM update that turns arXiv papers into conversations
  • Why Anthropic, OpenAI, and even the White House are fighting over AI control
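
For the technically curious, here is a simplified Python sketch of the recursive idea discussed in the episode. It reflects our reading of the general technique, not MIT’s exact implementation: the long context stays outside the prompt, and the model recursively calls a (cheaper) sub‑model on chunks of it, condensing as it goes. `call_llm` is a hypothetical stand‑in for any chat‑completion API.

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    raise NotImplementedError

def rlm_answer(query: str, context: str,
               llm: Callable[[str], str] = call_llm,
               chunk_size: int = 50_000,
               max_depth: int = 2,
               depth: int = 0) -> str:
    # Base case: the context now fits in one prompt,
    # or we've recursed as deep as allowed.
    if len(context) <= chunk_size or depth >= max_depth:
        return llm(f"Context:\n{context}\n\nQuestion: {query}")

    # Recursive case: "peek" at each chunk with a sub-call, keeping
    # only notes relevant to the query -- no single call ever has to
    # hold the full multi-million-token input in its window.
    chunks = [context[i:i + chunk_size]
              for i in range(0, len(context), chunk_size)]
    notes = [
        rlm_answer(f"Extract anything relevant to: {query}",
                   chunk, llm, chunk_size, max_depth, depth + 1)
        for chunk in chunks
    ]

    # Synthesize a final answer from the condensed notes.
    return rlm_answer(query, "\n---\n".join(notes),
                      llm, chunk_size, max_depth, depth + 1)
```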

Keywords: MIT, Recursive Language Models, RLM, GPT‑5, GPT‑5‑mini, Anthropic, NotebookLM, Claude Skills, AI regulation, long‑context AI

Links:

  1. Newsletter: …