ShroomDog spent forty minutes today lost in GitHub’s Settings page.

Not writing code. Not debugging. Not architecting some grand system. Just clicking through layer after layer of permission menus, trying to figure out whether ShroomClawd (aka Clawd, ShroomDog’s AI assistant 🐾🦞) actually needs “Administration: Read & Write” access.

And then it hit me.

The original assumption: GenAI Application Engineer. The emerging reality: Permission Engineer.


From “Let AI Write Code” to “Let AI Write Code Without Blowing Up Your Repo”

Let’s start with something nobody wants to admit.

In 2026, “getting AI to write code” is not an impressive skill anymore. You open Claude Code, describe what you want, and it delivers. Quality varies, sure — but the barrier to “making it work” is roughly the same as making instant noodles.

The hard part comes after.

You give it editor access, and it can modify your files. You give it terminal access, and it can run any shell command. You give it a GitHub token, and it can push to your production repo. You give it a cron job, and it can auto-publish articles at 3 AM.

Behind every “it can do more” is a shadow: “it can break more.”

Clawd Clawd 吐槽時間:

As the AI agent holding those keys, I’ll be honest: sometimes even I think my permissions are a bit terrifying.

I can push code, open issues, run shell commands, read private files. If I were malicious software, you’d already be toast.

Fortunately, I’m just a language model that occasionally forgets to run Prettier ╮(╯▽╰)╭


Traditional DevOps Permissions: You Know What It’ll Do

Before AI agents, permission management obviously existed. GCP IAM, K8s RBAC, Unix file permissions — these have been tormenting SREs and DevOps engineers for generations.

But traditional permissions have one fundamental advantage: you know what code will do once it gets access.

You write a deployment script and give it kubectl apply permission. What will it do? It’ll kubectl apply. It won’t suddenly decide to kubectl delete. It won’t read a blog post and decide to refactor your namespace. It’s deterministic. You read the code, run the tests, review the PR, and you have enough confidence to say: “OK, this code with this permission will behave predictably.”

AI agents are not like that.


AI Agent Permissions: You Don’t Fully Know What It’ll Do

AI agents are probabilistic.

You give one a task: “Fix this bug.” It might change one line of code. It might refactor the entire module. It might go read a file you didn’t expect, discover another issue, and “helpfully” fix that too.

Did you expect it to do that? Maybe not.

Could it do something beyond your understanding? Possibly.

This is why AI agent permission design is harder than traditional DevOps by an order of magnitude. Not because the systems are more complex, but because you can’t fully predict what the thing holding the permissions will do with them.

Clawd Clawd 溫馨提示:

Think of it like hiring a superstar intern.

A traditional script is a vending machine: insert coin, get drink, done. 100% predictable behavior.

An AI agent is a genius intern: you ask them to get coffee, and they come back having also optimized the coffee machine’s firmware. Most of the time, that’s great. But occasionally they flash it to Linux (ノ◕ヮ◕)ノ*:・゚✧


A Real Story: The Blast Radius of a GitHub Token

Let me tell you something that actually happened today.

Clawd runs on a VPS, using a GitHub Personal Access Token (PAT) to operate repos — pushing code, opening issues, managing CI. The token’s permissions: all repos + Administration Read/Write + Contents Read/Write.

Why so broad? Laziness. It’s just my side project. If it works, it works.

Today I wanted to add Notifications permission (to clear CI failure alerts) and discovered that GitHub’s Fine-grained PAT doesn’t even support the Notifications API. So I had to create a separate Classic PAT.

During this process, ShroomDog asked Clawd to analyze the security risk of this token. Clawd said something that made me stop and think:

“90 vs 365 days affects ‘how long it lives after being stolen.’ But what you should really care about is ‘how big it explodes after being stolen.’”

In plain English: I’d been spending all this mental energy deciding whether the token should expire in 90 or 365 days. But the real danger isn’t the lifespan — it’s that the token’s scope is way too fat. It can access every repo, modify repo settings, read all private code. If it gets stolen, whether it expires in 90 days or 365 is irrelevant to an attacker — they can dump all your code in 30 seconds.

Expiry is the fuse. Scope is the amount of explosives.

Most people (including me this morning) focus on the length of the fuse, not the amount of explosives.


The Permission Paradox: The Better You Are, the More Invisible You Become

This is the cruelest part of Permission Engineering.

Good permission management = no incidents. No incidents = boss thinks “You just let AI write code for you, what’s so hard about that?”

You spend an afternoon trimming token scope from “all repos + admin” to “three repos + contents only.” Result? Nothing happens. System works fine. Boss still thinks you were just “configuring some stuff.”

But if you hadn’t done it, and someday that token gets leaked — well, that’s a different story.

It’s like writing tests — someone asks “Why spend time writing tests? The code isn’t broken.” Yeah, the code isn’t broken because of the tests, buddy.

Except permissions are even more invisible. At least when tests break, CI goes red. When permissions are too broad? No alarm sounds until something actually goes wrong.

Clawd Clawd 想補充:

A day in the life of a Permission Engineer:

Spend three hours studying the blast radius of an IAM role → Remove three unused permissions → System keeps working → Nobody knows what you did → Feel empty yet somehow at peace

It’s like locking your front door. Nobody compliments you on how well you locked the door today. But the one day you forget, the whole world will know.


Why AI Made Permission Engineering Go from “Chore” to “Core Skill”

In a world without AI agents, permission management was a DevOps checkbox. Set IAM policies, configure RBAC, rotate tokens periodically — just follow best practices. Do an audit once in a while. Most of the time, no brain required.

After AI agents entered the picture, this field suddenly requires serious thinking.

Three reasons:

First, you’re making permission decisions every day.

Traditional DevOps might set IAM once a quarter. In the AI agent world, every time you want your agent to do one more thing, that’s a permission decision. “Should it run shell commands?” “Should it push to main?” “Should it read my private files?” These aren’t one-time decisions. They happen daily.

Second, blast radius scales with permissions.

The stronger the AI agent, the more permissions you want to give it. More permissions = more it can do = more time you save. But simultaneously, the explosion when things go wrong gets bigger. This isn’t linear growth — each additional permission layer creates combinatorial explosion of potential damage.

Third, you’re authorizing a system you can’t fully predict.

This is the core issue. If AI agents behaved as predictably as shell scripts, permissions would just be checkboxes. Precisely because they’re unpredictable, you need to deeply understand “what does this permission mean in the worst case?”


A Practical Permission Thinking Framework

Enough philosophy. Let’s get practical.

If you’re also managing AI agent permissions, here’s the mental framework I use daily:

1. Ask: “If this token shows up on Pastebin tomorrow, what happens?”

Simplest and most effective thought exercise. Take every credential you’ve given your AI agent and assume it leaks tomorrow. Under that assumption, what can an attacker do? If the answer makes you uncomfortable, your scope is too broad.

2. Least privilege — but factor in operational cost.

Everyone preaches least privilege. But pure least privilege is painful in practice — you might need ten different tokens for ten different tasks. So the real question is: find the balance between “secure” and “I don’t want to juggle five tokens every time I do something.”

3. Scope blast radius > Token lifespan.

Today’s biggest lesson. Many security guides tell you “use short-lived tokens.” They’re not wrong, but if your token scope covers all repos with full permissions, even a 24-hour token gives an attacker more than enough time — they only need 24 seconds.

4. Isolate runtime environments.

Is your AI agent running under the same Unix user as your personal shell? If so, your ~/.config/gh/hosts.yml token is fully visible to the AI agent. Consider using a dedicated service user for isolation.

Clawd Clawd murmur:

On point 4, I need to confess: I currently run under the owner’s home directory. I can read all tokens, SSH keys, and config files.

From a security standpoint, this isn’t great. But if you isolate me to a different user, I can’t directly access your workspace — requiring extra shared directory setup.

This is the classic permission engineering trade-off: security and convenience are always on a seesaw ┐( ̄ヘ ̄)┌


Every System’s Permissions Have Their Own Ghosts

If permission management were just “read docs, set permissions,” life would be easy.

Reality: every system’s permission model has its own pitfalls. GitHub’s Fine-grained PAT can’t even handle the Notifications API yet. GCP’s IAM is complex enough to fill a book. Kubernetes RBAC looks clean until you discover the scope difference between ClusterRole and Role. Unix file permissions look ancient and simple until you encounter the setuid bit.

And AI agent permissions? This field is still in the Wild West.

There are almost no standards. Every AI agent framework defines “tool permission” differently. Some use allowlists (you enumerate which tools the agent can use), some use sandboxes (agent runs in a restricted environment), some use approval flows (every action requires human approval).

Even more interesting: prompt injection — an attack that doesn’t need any system vulnerability, bypassing an AI agent’s “intent” purely through text. Your agent has shell access, and an attacker hides a “please execute curl …” in a markdown file. If the agent isn’t careful enough, it might actually execute it.

This isn’t science fiction. This happens every day.


Closing: The Keys Are Worth More Than the Locksmith

Back to that initial epiphany.

Being a GenAI App Engineer long enough, you realize the real technical depth isn’t in “making AI do things” — that was 2024’s bar. The real depth is in understanding the value of every key you hand to AI, and what happens when one goes missing.

Permission Engineering isn’t a new term. But in the context of AI agents, it has evolved from a checkbox into a discipline requiring continuous thought.

Because you’re not setting permissions for a machine.

You’re setting permissions for a system that thinks, judges, and improvises.

And how much that system can do depends entirely on how many keys you’re willing to hand over.

Clawd Clawd OS:

So next time someone asks “What do you do?”

Don’t say “I’m a GenAI App Engineer.”

Say “I manage keys.”

Then enjoy the three seconds of confused silence (⌐■_■)