Natural-Language Agent Harnesses: When an Agent's Soul Moves from Code to Plain Text

A Tsinghua Shenzhen team proposes NLAH (Natural-Language Agent Harnesses): moving agent control logic from code into structured natural language, executed by an IHR runtime. Experiments show harnesses can reshape agent behavior patterns entirely, but more structure doesn't always mean better results. Dan McAteer argues harness engineering matters as much as model capability.
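The core NLAH idea can be sketched in a few lines: the harness is structured plain text rather than code, and a small runtime walks it step by step, handing each step to the model. This is an illustrative sketch only; the harness text and the runtime interface below are hypothetical stand-ins, since the summary does not specify the IHR runtime's actual API.

```python
# Hypothetical harness: control logic expressed as structured
# natural language instead of code. The numbered-step format here
# is illustrative, not the paper's actual schema.
HARNESS = """\
1. Read the user's request and restate the goal in one sentence.
2. If the goal needs file access, list the relevant files first.
3. Produce the answer, then verify it against the stated goal.
"""

def run_harness(harness: str, model_call) -> list:
    """Minimal stand-in for an IHR-style runtime: execute each
    non-empty harness step by passing it to the model."""
    transcript = []
    for line in harness.splitlines():
        step = line.strip()
        if step:
            transcript.append(model_call(step))
    return transcript
```

Because the control flow lives in editable text, reshaping agent behavior means editing the harness, not redeploying code; the paper's caveat that more structure is not always better then becomes a question of how many steps to write.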

The Truth About World-Class Agentic Engineers — Less Is More

The core message is simple: most people don't fail because the model is weak — they fail because their context management is a mess. The author advocates starting with a minimal CLI workflow and iterating with rules, skills, and clear task endpoints. It's not about chasing new tools; it's about making your agent's behavior controllable, verifiable, and convergent.

The File System Is the New Database: One Person Built a Personal OS for AI Agents with Git + 80 Files

A Context Engineer at Sully.ai built his entire digital brain inside a Git repo: 80+ markdown/YAML/JSONL files, no database, no vector store. Three-layer Progressive Disclosure, Episodic Memory, and auto-loading Skills — so the AI already knows who he is, how he writes, and what he's working on the moment it boots up.

OneContext: Teaching Coding Agents to Actually Remember Things (ACL 2025)

Junde Wu from Oxford + NUS got fed up with coding agents forgetting everything between sessions. So he built OneContext — a Git-inspired context management system using file system + Git + knowledge graphs. It works across sessions, devices, and different agents (Claude Code / Codex). The underlying GCC paper, accepted as an ACL 2025 main-conference long paper, achieves 48% on SWE-Bench-Lite, beating 26 systems.