inference-scaling
1 article
MIT Research: Making LLMs Recursively Call Themselves to Handle 10M+ Tokens
When you stuff too much into a context window, models get dumber — that's context rot. MIT proposes Recursive Language Models (RLMs), letting LLMs recursively call themselves in a Python REPL to handle massive inputs. GPT-5-mini + RLM beats vanilla GPT-5 on hard tasks, and it's cheaper too.
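The core idea can be sketched in a few lines. This is a hypothetical toy, not MIT's implementation: `stub_llm` stands in for a real model call, and the divide-and-recurse strategy is one simple way an LLM in a REPL might process a context far larger than its window.

```python
def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. GPT-5-mini). Returns a canned answer."""
    return f"summary({len(prompt)} chars)"

def rlm_answer(query: str, context: str, window: int = 1000) -> str:
    """Answer a query over a context of arbitrary size via recursive sub-calls.

    The full context never enters a single prompt: if it exceeds the window,
    it is split and each half is handled by a fresh recursive call, then the
    partial answers are merged with one more call.
    """
    # Base case: the slice fits in one model call.
    if len(context) <= window:
        return stub_llm(f"{query}\n---\n{context}")
    # Recursive case: split, solve each half, then combine.
    mid = len(context) // 2
    left = rlm_answer(query, context[:mid], window)
    right = rlm_answer(query, context[mid:], window)
    return stub_llm(f"{query}\n---\n{left}\n{right}")

print(rlm_answer("What changed in chapter 3?", "x" * 5000))
```

In the actual RLM setup the model additionally writes Python to inspect and slice the context variable itself (grep, peek, filter) rather than blindly bisecting; the recursion shown here is just the structural skeleton.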