Have you ever thrown an Obsidian vault at Claude Code, only to watch it gobble up every single file like it’s at an all-you-can-eat buffet? Before it even finishes the task, the context window explodes.

That’s the problem Heinrich (@arscontexta) set out to solve. His answer is surprisingly simple: force Claude to be “picky.”

Instead of letting it read everything, you set up a system that teaches it to “read the menu before ordering.” The pattern is called Progressive Disclosure — four layers of filtering, each one deeper than the last. Only the truly relevant stuff gets fully loaded.

Clawd Clawd butts in:

Progressive Disclosure is an ancient concept from UI design. Google search results are the classic example — you get titles and snippets first, and only click through to the full page if you’re interested. Now we’re applying the same idea to manage AI context, and it fits perfectly ( ̄▽ ̄)⁠/ Think of it like dating — you don’t dump your entire life story on the first date. You reveal things gradually.

Layer 1: File Tree — Give Claude a Map First

Imagine arriving in a new city. What’s the first thing you do? Open Google Maps, right? The file tree is Claude’s Google Maps.

At session start, a hook automatically injects the complete file tree. Claude hasn’t touched anything yet, but it already knows “okay, this is roughly how this vault is organized.”

hooks:
  SessionStart:
    - hooks:
        - type: "command"
          command: "tree -L 3 -a -I '.git|.obsidian' --noreport"
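
If your Claude Code version configures hooks via `settings.json` rather than YAML, the same hook maps to roughly this JSON shape (sketch from my understanding of the hooks config — verify against the current docs):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "tree -L 3 -a -I '.git|.obsidian' --noreport"
          }
        ]
      }
    ]
  }
}
```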

But here’s the key insight: filenames need to be descriptive.

Compare these two:

  • `search notes.md` — huh? What is this?
  • `queries evolve during search so agents should checkpoint.md` — oh! Just from the filename, you know this note is about how search queries evolve and agents should checkpoint.

Good filenames are like good book titles — one glance tells you whether it’s worth opening. Claude is already filtering at this layer.

Clawd Clawd murmurs:

Real talk: this “treat filenames as summaries” trick isn’t just useful for AI. I’ve seen too many people with Obsidian vaults full of Untitled-1, Untitled-2, notes-final-final-v3, and then three months later they can’t find anything either ┐( ̄ヘ ̄)┌ Be kind to your future self, okay?

Layer 2: YAML Descriptions — An Elevator Pitch for Each Note

After the filename filter, Claude can check the description field in each note’s frontmatter. One sentence: what this note does.

---
description: Memory retrieval in brains works through spreading activation where neighbors prime each other. Wiki link traversal replicates this, making backlinks function as primes that surface relevant contexts
---
# spreading activation models how agents should traverse
...

Claude doesn’t need to open the whole file. A quick ripgrep scan of descriptions tells it which notes are worth exploring:

rg "^description:" 01_thinking/*.md
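
The same scan, sketched in Python for folks who want it in a script (paths and the `description:` field mirror the article's examples; function names are mine):

```python
# Sketch: collect each note's one-line pitch from its frontmatter
# without ever reading the body -- the layer-2 filter in ~15 lines.
import os
import re

def read_description(path):
    """Return the 'description:' line from a note's YAML frontmatter, if any."""
    with open(path, encoding="utf-8") as f:
        head = f.read(2048)          # frontmatter lives at the top of the file
    if not head.startswith("---"):
        return None                  # no frontmatter block at all
    m = re.search(r"^description:\s*(.+)$", head, re.MULTILINE)
    return m.group(1).strip() if m else None

def scan(folder):
    """Map every markdown filename in a folder to its description (or None)."""
    return {
        name: read_description(os.path.join(folder, name))
        for name in sorted(os.listdir(folder))
        if name.endswith(".md")
    }
```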

Clawd Clawd’s inner monologue:

Here’s the genius of this approach: it breaks the paradox of “you have to read the whole article to decide whether you should read it.” Imagine you’re in a bookstore. You don’t read every book cover to cover before deciding to buy one, right? You read the back cover blurb. The YAML description is your note’s back cover blurb. The token savings are massive (๑•̀ㅂ•́)و✧

Layer 3: Outline — Check the Table of Contents, Don’t Read the Whole Book

If a note passes the description filter, Claude checks the outline next. Why? Because sometimes you only need one section. Loading the entire file just creates noise.

grep -n "^#" "01_thinking/knowledge-work.md"
# output:
# 5:# knowledge-work
# 13:## Core Ideas
# 19:## Tensions
# 23:## Gaps

See that? If Claude only needs the “Tensions” section, it can read lines 19-22 precisely. No need to load the other three sections.
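
This targeted read can be sketched in Python on a throwaway note (the note body and section names are illustrative; the logic is the outline-then-slice step described above):

```python
# Simulate layer 3: build the outline with line numbers, then read
# only the lines of the one section we actually need.
sample = """\
# knowledge-work

## Core Ideas
ideas here
## Tensions
tension one
## Gaps
gaps here
"""
lines = sample.splitlines()

# Outline as (1-based line number, header) pairs, like `grep -n "^#"` prints.
outline = [(i, l) for i, l in enumerate(lines, start=1) if l.startswith("#")]

# Read just the "Tensions" section: from its header up to the next header.
start = next(i for i, l in outline if l == "## Tensions")
end = next((i for i, _ in outline if i > start), len(lines) + 1)
section = lines[start - 1 : end - 1]   # the header line plus its body
```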

It’s like using a textbook — you check the table of contents first to find the relevant chapter. You don’t start reading from page one.

Layer 4: Full Content — Only the Worthy Get Read in Full

Only notes that pass all three previous filters get their full content loaded.

The key point: most notes never reach this layer.

That’s the whole point of the system. When Claude has to justify every single “read” — “why should I read this? What did the first three layers tell me?” — its curation ability skyrockets. It goes from all-you-can-eat mode to Michelin chef mode.
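
A minimal model of the whole cascade in plain Python (the `relevant` check and every name here are mine and purely illustrative — the real filtering is Claude's judgment, not substring matching):

```python
# Illustrative model of the four layers: cheap metadata checks gate the
# expensive step (loading full content), so most notes never reach it.
from dataclasses import dataclass, field

@dataclass
class Note:
    path: str                                    # layer 1: descriptive filename
    description: str                             # layer 2: frontmatter pitch
    outline: list = field(default_factory=list)  # layer 3: section headers
    # layer 4 (the full body) is deliberately not loaded here

def relevant(text: str, query: str) -> bool:
    # Stand-in for the model's actual relevance judgment.
    return query.lower() in text.lower()

def progressive_disclosure(notes, query):
    picked = []
    for n in notes:
        # Layers 1-2: skip unless filename or description looks on-topic.
        if not (relevant(n.path, query) or relevant(n.description, query)):
            continue
        # Layer 3: narrow to matching sections, else keep the whole outline.
        sections = [h for h in n.outline if relevant(h, query)] or n.outline
        picked.append((n.path, sections))        # only these earn a full read
    return picked
```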

Clawd Clawd whispers:

This reminds me of CP-85 (the Steve Yegge AI Vampire piece), which talked about the cost formula for AI tools. Progressive Disclosure is essentially about crushing the “waste rate” of your context — every token needs to count. Some people have vaults with thousands of files. If you dump everything in, that’s not context engineering — that’s context binge eating (╯°□°)⁠╯

Wait, This Isn’t Even a New Invention

If you’ve used MCP (Model Context Protocol), this structure will look familiar.

When you have 50+ tools, Claude doesn’t load all tool definitions upfront. It looks at the list first, then fetches detailed specs only when needed:

tool list → tool search → tool references → full definitions

Completely isomorphic to notes:

file tree → descriptions → outline → full content

Same pattern, different domain. Good design tends to converge.

So How Hard Is This to Set Up?

After all these layers and concepts, you might think this is a big project.

Nope. The whole thing is just three things:

  • One hook — run tree at session start to give Claude a map.
  • One frontmatter field — add a description line to each note.
  • One instruction — tell Claude in CLAUDE.md: “check descriptions before reading files.”

That’s it. Three small changes, and your Claude goes from “stuff everything in” to “curate with precision.” No new tools to install, no complex plugins to write, no Obsidian settings to change.
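
For the third piece, the CLAUDE.md instruction could be as short as this (wording is mine, not a quote from Heinrich):

```markdown
## Reading notes
- At session start you receive the file tree; treat filenames as summaries.
- Before opening any note, scan descriptions: `rg "^description:" <folder>/*.md`
- If a note looks relevant, check its outline (`grep -n "^#"`) and read only
  the section you need. Load a full file only as a last resort.
```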

Coming back to where we started: you’re not limiting Claude’s appetite — you’re teaching it taste. All-you-can-eat isn’t the problem; eating without taste is. Once it learns to be selective, the same context window holds three times more useful information.

Clawd Clawd’s honest take:

Honestly, the most counterintuitive part of this workflow is “adding constraints = more powerful.” Our instinct says more data for AI is always better, but context windows are finite. Stuff too much in and the AI actually gets dumber — like cramming every page of a textbook the night before an exam, only to remember nothing. You’d be better off just studying the professor’s highlighted key points (¬‿¬) The beauty of Heinrich’s method is that it costs almost nothing to implement, yet the difference in results is night and day.


Community Q&A Highlights

@Catcher4242 asked: This assumes every note has a description. Any automated alternatives? Manual writing or batch generation both seem expensive and tedious.

Heinrich’s answer: Just tell Claude.

Yep. Just ask Claude to do it. “Help me organize this note and add a description” — ten seconds per note. It’s your employee, not your boss.

Clawd Clawd whispers:

I love this answer so much. Everyone keeps thinking about how to “automate” or “batch process” things, but the simplest automation is — ask the AI. You already have an incredibly powerful language model sitting right there. Why would you write a script? That’s the spirit of vibe coding ╰(°▽°)⁠╯

Heinrich shared: @lt0gt uses a similar hook but injects more context — recent work history and available tools at session start.

This is yet another extension of Progressive Disclosure — you can tell Claude “what you’ve been working on lately” at session start, so it doesn’t just have a map, it also has a compass (◕‿◕)