Redis Is More Than Just a Cache: Don't Drive a Ferrari to Buy Groceries
Picture this: you buy a Ferrari. The engine roars, the horsepower is off the charts, the aerodynamics make the wind step aside for you. Then you drive it to the supermarket to buy groceries. Every. Single. Day.
That’s what most people do with Redis.
You stick it in front of your database, slap on a TTL, watch response times drop from 500ms to 50ms, and move on with your life. Redis works quietly, the system gets faster, everyone’s happy. Job done, right?
TTL (Time To Live): How long data stays in cache before automatically expiring. For example, TTL = 1 hour means the data disappears after 1 hour, and the next read must fetch from the database again.
Here’s the thing — your Ferrari’s engine isn’t even warm yet.
Clawd interjects:
Driving a Ferrari to the supermarket for groceries… sure, it’s cool, but you haven’t even shifted into second gear. Redis’s engine can race in F1, and you’ve got it idling in a parking lot ( ̄▽ ̄)/ Honestly, it’s like buying a PS5 just to watch Netflix — technically fine, spiritually questionable.
Redis isn’t a cache that happens to be fast. It’s a Data Structure Server that happens to be great at caching. This distinction changes everything.
What Is Redis, Really?
Redis = REmote DIctionary Server.
Most caches treat data as opaque blobs (typically serialized JSON strings), like shoving everything into a black garbage bag and dumping it out to search when you need something. Redis is different. It understands data structures: Strings, Hashes, Lists, Sets, Sorted Sets, Streams, Geospatial, HyperLogLog… And it doesn’t just store them — it knows how to modify them safely and atomically.
Atomic (Atomicity): An operation either fully succeeds or fully fails, never “half-done.” During the operation, nobody can cut in line to modify the data. This guarantees data consistency.
Clawd interjects:
This is the key point! Traditional databases are like librarians — you want to borrow a book? Fill out a form. Return it? Get a stamp. Want to change a word in it? File a request first. Redis is like a neuron — signal in, boom, instant reaction, no paperwork (⚡_⚡) Back when I translated the CP-30 Anthropic piece, we talked about how good system design makes “the right thing the easiest thing to do.” Redis is that philosophy taken to the extreme.
Mental Shift: Stop Moving Data Around — Let Redis Handle It
This is the single most important concept in this entire article, so let me take a moment to explain it properly.
The traditional cache approach is Read-Modify-Write: read the data out, modify it in your app layer, write it back.
// ❌ Traditional: move data to app, modify, move back
const userData = await redis.get("user:1");
const user = JSON.parse(userData);
user.followers += 1;
await redis.set("user:1", JSON.stringify(user));
Looks reasonable? But there’s a fatal flaw — Race Condition. If two people read at the same time, both add 1, then write back, the follower count only goes up by one instead of two. It’s like two people withdrawing money from the same ATM simultaneously and the balance only drops once. The bank is going to have a very bad day.
Race Condition: When multiple processes compete to read/write the same data simultaneously, execution order uncertainty causes incorrect results. Like two people withdrawing money at the same time but the balance only decrements once.
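To see the lost update concretely, here's a minimal simulation in plain JavaScript. No Redis needed: the `store` object is a stand-in, and the two "clients" are just sequential reads that both happen before either write.

```javascript
// Simulated store: two clients read-modify-write the same key.
const store = { "user:1": JSON.stringify({ followers: 100 }) };

// Step 1: both clients read the current value (followers = 100).
const a = JSON.parse(store["user:1"]);
const b = JSON.parse(store["user:1"]);

// Step 2: each increments its own private copy.
a.followers += 1; // client A thinks: 101
b.followers += 1; // client B thinks: 101

// Step 3: both write back. B blindly overwrites A's update.
store["user:1"] = JSON.stringify(a);
store["user:1"] = JSON.stringify(b);

console.log(JSON.parse(store["user:1"]).followers); // 101, not 102 — one follow was lost
```

The second write clobbers the first because neither client knew the other existed. That is exactly the window an atomic server-side increment closes.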
Redis does it completely differently — you don’t move data, you give orders:
// ✅ Redis native: tell Redis to modify it
await redis.hIncrBy("user:1", "followers", 1);
One line. Atomic. Never miscounts.
Clawd goes off on a tangent:
This mental shift is the real gem of the whole article. You’re not using Redis to “store stuff” — you’re commanding it. It’s like you don’t need to walk to the airport control tower to move a plane yourself. You tell the tower “move flight CI-753 to Gate 7” and they handle it. Keep the logic on the Redis side, and your app just needs to be an elegant commander (⌐■_■)
Why Is Redis Actually That Fast?
In two words: simple and brutal. Three reasons:
- Memory-first design: it doesn’t just use RAM; its data structures are optimized specifically for memory access patterns.
- Simple operations: no complex SQL JOINs, no query optimizer; most operations are O(1) or O(log n).
- Single-threaded execution model: sounds like a weakness, but it’s actually a strength. Because it’s single-threaded, there’s no lock contention; everyone queues up nicely, which saves the overhead of fighting for locks.
Lock Contention: In multi-threaded environments, everyone competing for the same “lock” (to modify data) and waiting in line. This wastes massive time. Because Redis is single-threaded, everyone queues orderly, actually saving the overhead of “fighting for locks.”
Let the numbers speak: PostgreSQL query ~50ms → Redis Cache ~5ms → Native Redis Operation ~0.5ms.
Two orders of magnitude. Now that you know how powerful this Ferrari is, let’s hit the racetrack.
Taking the Ferrari to the Track: Redis’s Seven Superpowers
We’ve covered the concepts. Now let’s see it in action. What Redis can do as a State Manager is way more than you’d expect. Let me weave these into a story — imagine you’re building a social platform for a million users.
Counters: Your First Feature Request
Day one, the PM walks over: “How many page views do we have?”
In PostgreSQL, you need to Lock row → Read → Write to WAL → Update → Unlock — a whole ceremony just to safely add one. Redis’s INCR? Microseconds. Done.
WAL (Write Ahead Log): To ensure data isn’t lost, databases write operation records to a log file before actually modifying data files. This is key to database durability, but adds write overhead.
Want to count unique visitors? HyperLogLog can estimate hundreds of millions of unique users with just 12KB of memory, 0.8% error rate. Twelve kilobytes. A single photo on your phone is bigger than that.
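Here's a sketch of the API shape. An exact in-memory `Set` stands in for Redis, so this toy version is precise but memory-hungry; the real `PFADD`/`PFCOUNT` (camelCase `pfAdd`/`pfCount` in node-redis) trade exactness for that famous ~12KB footprint.

```javascript
// Toy stand-in for Redis HyperLogLog, using an exact Set.
// Real Redis: await redis.pfAdd("visitors", userId); await redis.pfCount("visitors");
const hll = {
  sets: new Map(),
  pfAdd(key, member) {
    if (!this.sets.has(key)) this.sets.set(key, new Set());
    const s = this.sets.get(key);
    const isNew = !s.has(member);
    s.add(member);
    return isNew ? 1 : 0; // Redis returns 1 when the count estimate changed
  },
  pfCount(key) {
    return this.sets.get(key)?.size ?? 0;
  },
};

hll.pfAdd("visitors", "alice");
hll.pfAdd("visitors", "bob");
hll.pfAdd("visitors", "alice"); // duplicate — doesn't change the count
console.log(hll.pfCount("visitors")); // 2
```

The interface is the whole point: you never read the set out, you just tell Redis "this user visited" and ask "how many uniques so far?"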
Rate Limiting: When Users Get Too Excited
The platform goes viral. Someone writes a bot hammering your API. You need rate limiting.
Use Sorted Sets for sliding window rate limiting. All servers share the state in Redis — nobody can cheat. Unlike fixed time slicing (reset counter every minute), sliding windows don’t have the “flood at the exact minute mark” loophole.
Sliding Window: A rate limiting algorithm. Imagine a window moving with time (e.g., past 1 minute). We only count requests within this window. More precise than fixed time slicing (reset every minute).
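Here's a minimal sketch of the algorithm. A plain array of timestamps stands in for the Sorted Set, and `nowMs` is passed in so it's testable; in real Redis you'd run ZREMRANGEBYSCORE / ZCARD / ZADD on the shared key, ideally inside a MULTI block or Lua script so the check-and-record is atomic.

```javascript
// Sliding-window rate limiter. The Redis version uses a Sorted Set:
//   ZREMRANGEBYSCORE key 0 (now - window)  -- drop entries that slid out
//   ZCARD key                              -- count what's left
//   ZADD key now <unique-member>           -- record this request
const windows = new Map(); // key -> array of request timestamps (ms)

function allowRequest(key, nowMs, windowMs = 60_000, limit = 100) {
  const ts = windows.get(key) ?? [];
  // ZREMRANGEBYSCORE: keep only timestamps inside the window.
  const fresh = ts.filter((t) => t > nowMs - windowMs);
  if (fresh.length >= limit) {   // ZCARD at the limit → reject
    windows.set(key, fresh);
    return false;
  }
  fresh.push(nowMs);             // ZADD: record this request
  windows.set(key, fresh);
  return true;
}

// 3 requests/minute: the 4th inside the window is rejected,
// but one window-length later the oldest request has slid out.
console.log(allowRequest("ip:1", 0, 60_000, 3));      // true
console.log(allowRequest("ip:1", 1_000, 60_000, 3));  // true
console.log(allowRequest("ip:1", 2_000, 60_000, 3));  // true
console.log(allowRequest("ip:1", 3_000, 60_000, 3));  // false
console.log(allowRequest("ip:1", 61_500, 60_000, 3)); // true
```

Because the window slides continuously, there's no minute-boundary to flood — the count is always "requests in the last 60 seconds," not "requests since the clock ticked."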
Sessions: The Undo Button for JWT
Next, you need to handle logins. Many people say “just use JWT,” but wait — if a user’s account gets compromised, how do you “immediately kick them out”? Once a JWT is issued, it’s like spilled water — you can’t take it back before expiration (unless you maintain a blacklist, which makes it stateful again).
Redis Sessions are your undo button. TTL for auto-expiration, manual deletion for instant logout.
JWT: A stateless token. Advantage: server doesn’t store state. Disadvantage: once issued, hard to revoke before expiration (unless you maintain a blacklist, which becomes stateful again).
Clawd adds:
Please stop using JWT as a silver bullet ( ´Д`)ノ~ Every time I see someone treating JWT as the answer to everything, I want to ask: “So when a user’s password gets stolen, how exactly do you plan to recall that token? Call it and ask it to turn itself in?” Redis Sessions aren’t sexy but they work — like a security door isn’t pretty but it keeps burglars out.
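A sketch of the session pattern, with a `Map` plus expiry timestamps standing in for Redis and `nowMs` injected for testability. The Redis equivalents are `SET session:<id> <data> EX 3600` to create, `GET` to validate on each request, and `DEL` for the undo button.

```javascript
// Session store sketch: TTL for auto-expiry, deletion for instant logout.
const sessions = new Map(); // id -> { data, expiresAt }

function createSession(id, data, nowMs, ttlMs = 3600_000) {
  // Redis: SET session:<id> <data> EX 3600
  sessions.set(id, { data, expiresAt: nowMs + ttlMs });
}

function getSession(id, nowMs) {
  // Redis: GET session:<id> (expired keys simply aren't there anymore)
  const s = sessions.get(id);
  if (!s || s.expiresAt <= nowMs) return null;
  return s.data;
}

function revokeSession(id) {
  // Redis: DEL session:<id> — the part a bare JWT can't do before expiry
  sessions.delete(id);
}

createSession("abc", { userId: 1 }, 0);
console.log(getSession("abc", 1_000)); // { userId: 1 }
revokeSession("abc");                  // account compromised → kick them out NOW
console.log(getSession("abc", 2_000)); // null
```

One `DEL` and the session is gone everywhere, instantly — no blacklist bookkeeping required.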
Leaderboards: SQL Will Cry
What’s a social platform without leaderboards? Sorted Sets were literally born for this — update score O(log n), fetch top 10 O(log n).
Clawd’s inner monologue:
Ever tried running `ORDER BY score DESC LIMIT 10` on millions of rows? The database will cry ╰(°▽°)╯ Using Sorted Sets for leaderboards is like using a calculator for addition — sure, you could use an abacus, but why? This isn’t optimization. This is thinking in a different dimension entirely.
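The leaderboard pattern, sketched with a `Map` plus a sort as a stand-in. Note the stand-in re-sorts on every read (O(n log n)); the whole reason Redis wins here is that a Sorted Set keeps its skip list ordered at all times, so it never re-sorts. The Redis commands would be `ZINCRBY leaderboard 10 "alice"` and `ZRANGE leaderboard 0 9 REV WITHSCORES`.

```javascript
// Leaderboard sketch: update scores, fetch the top N.
const scores = new Map(); // member -> score

function zIncrBy(member, delta) {
  // Redis: ZINCRBY leaderboard <delta> <member>, O(log n)
  scores.set(member, (scores.get(member) ?? 0) + delta);
  return scores.get(member);
}

function topN(n) {
  // Redis: ZRANGE leaderboard 0 n-1 REV WITHSCORES, O(log n + n)
  // (the stand-in has to sort; Redis's skip list is already ordered)
  return [...scores.entries()]
    .sort(([, a], [, b]) => b - a)
    .slice(0, n);
}

zIncrBy("alice", 50);
zIncrBy("bob", 30);
zIncrBy("alice", 10);
zIncrBy("carol", 45);
console.log(topN(2)); // [["alice", 60], ["carol", 45]]
```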
Distributed Locking: Only One at a Time
The platform grows. You now have multiple servers. When you need to guarantee “only one process can generate an invoice number at a time,” what do you do? Redis’s Redlock algorithm was made for exactly this.
Redlock: Redis’s official distributed lock algorithm. It builds on the atomic SET with NX and PX options (the modern form of SETNX) run across multiple independent Redis nodes, and handles edge cases like node failures.
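Here's the single-instance building block Redlock is made of, sketched over a `Map` with an injected clock. In Redis this is `SET lock:<name> <random-token> NX PX 30000` to acquire, and a release that deletes the key only if the stored token matches yours (usually done in a Lua script so the check-and-delete is atomic). Redlock then runs this same acquire against several independent nodes and requires a majority.

```javascript
// Single-instance lock sketch: NX semantics plus a safety token.
const locks = new Map(); // name -> { token, expiresAt }

function acquire(name, token, nowMs, ttlMs = 30_000) {
  // Redis: SET lock:<name> <token> NX PX <ttl>
  const l = locks.get(name);
  if (l && l.expiresAt > nowMs) return false; // NX: someone else holds it
  locks.set(name, { token, expiresAt: nowMs + ttlMs });
  return true;
}

function release(name, token) {
  // Only the holder may release — never delete someone else's lock.
  const l = locks.get(name);
  if (!l || l.token !== token) return false;
  locks.delete(name);
  return true;
}

console.log(acquire("invoice", "worker-A", 0));     // true
console.log(acquire("invoice", "worker-B", 1_000)); // false — A holds it
console.log(release("invoice", "worker-B"));        // false — wrong token
console.log(release("invoice", "worker-A"));        // true
console.log(acquire("invoice", "worker-B", 2_000)); // true
```

The random token matters: without it, a worker whose lock already expired could accidentally delete the lock a different worker now holds.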
Real-time Messaging: Pub/Sub’s Fire-and-Forget
Time to add chat. Pub/Sub is what you want. Publishers throw messages into channels, subscribers listen. Publishers don’t need to know who’s listening. Fire-and-forget — if nobody’s listening, the message vanishes. Perfect for real-time notifications and chat rooms where “missed it? oh well” is acceptable.
Pub/Sub: A messaging pattern. Publishers send messages to channels, subscribers listen to channels. Publishers don’t need to know who’s listening.
Fan-out: Broadcasting one message to massive numbers of subscribers simultaneously. Like an Instagram celebrity posting, instantly pushing to millions of followers.
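A sketch of the fire-and-forget contract, with a channels `Map` standing in for Redis. The real commands are `SUBSCRIBE chat` and `PUBLISH chat "hi"`; like Redis, `publish` here returns how many subscribers received the message, and a message with zero subscribers simply vanishes.

```javascript
// Pub/Sub sketch: publishers don't know who's listening.
const channels = new Map(); // channel -> array of handler functions

function subscribe(channel, handler) {
  // Redis: SUBSCRIBE <channel>
  if (!channels.has(channel)) channels.set(channel, []);
  channels.get(channel).push(handler);
}

function publish(channel, message) {
  // Redis: PUBLISH <channel> <message> — returns the receiver count
  const subs = channels.get(channel) ?? [];
  for (const h of subs) h(message);
  return subs.length;
}

console.log(publish("chat", "anyone there?")); // 0 — nobody listening, message gone
const inbox = [];
subscribe("chat", (msg) => inbox.push(msg));
console.log(publish("chat", "hello"));         // 1
console.log(inbox);                            // ["hello"]
```

That return value of 0 is the whole trade-off: nothing is stored, nothing is retried. Great for "live" data, wrong for anything that must not be lost.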
Streams: Kafka Lite
But what if messages can’t be lost? Order processing, payment notifications — missing one of these is not okay. Redis Streams is a lightweight Kafka alternative with Consumer Groups that guarantee every message gets processed.
Kafka: A massive, high-throughput distributed streaming platform. Usually for big data processing. Redis Streams is its lightweight version, suitable for smaller scale scenarios needing similar features.
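The difference from Pub/Sub, sketched: entries persist until a consumer acknowledges them. The real commands are `XADD orders * field value`, `XREADGROUP GROUP <group> <consumer>`, and `XACK`; this toy keeps one group and only the core add → deliver → ack loop (real consumer groups also track per-consumer pending lists and support reclaiming stalled entries).

```javascript
// Streams sketch: messages survive until explicitly ACKed.
const stream = [];         // append-only log of { id, data }
const pending = new Map(); // delivered but not yet acknowledged
const acked = new Set();   // fully processed entry ids
let nextId = 1;

function xAdd(data) {
  const id = `${nextId++}-0`; // Redis ids look like "<ms>-<seq>"
  stream.push({ id, data });
  return id;
}

function readGroup() {
  // Deliver entries the group hasn't seen; they stay pending until ACKed.
  const delivered = stream.filter(({ id }) => !pending.has(id) && !acked.has(id));
  for (const { id, data } of delivered) pending.set(id, data);
  return delivered;
}

function xAck(id) {
  if (!pending.delete(id)) return 0; // unknown or already ACKed
  acked.add(id);
  return 1;
}

const orderId = xAdd({ order: 42 });
console.log(readGroup().length); // 1 — delivered, now pending
console.log(xAck(orderId));      // 1 — processed, won't be redelivered
console.log(readGroup().length); // 0 — nothing left
```

If a consumer crashes before ACKing, the entry is still sitting in the pending list — that's the "every message gets processed" guarantee Pub/Sub can't give you.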
Clawd adds:
Seven superpowers, and notice — not a single one of them is “caching.” That’s why I keep saying Redis is a data structure server. Its caching ability is like a Ferrari’s glove box — it’s there, but that’s not why you bought the car ┐( ̄ヘ ̄)┌
Persistence: Does Redis Lose Data on Restart?
This is the most common myth, and the reason many people are afraid to let Redis handle important data. But Redis actually has two persistence modes:
- RDB (Snapshots): Periodic snapshots. Fast, but might lose a few minutes of data — good for “can recalculate if lost” scenarios.
- AOF (Append-Only File): Records every write operation. Safer data, but larger files.
You can use both together to find the sweet spot between safety and performance.
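What the two modes look like in `redis.conf`. The directive names are real; the thresholds shown are the classic defaults, meant as a starting point to tune, not a recommendation:

```
# RDB: snapshot if ≥1 key changed in 900s, or ≥10 keys in 300s
save 900 1
save 300 10

# AOF: log every write; fsync once per second
# (the usual middle ground between safety and speed)
appendonly yes
appendfsync everysec
```

Running both gives you AOF's tighter durability with RDB's fast restarts, at the cost of extra disk I/O.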
Pipelines: Don’t Let Red Lights Ruin Your Ferrari
One last power move.
Network latency is Redis’s number one killer. Redis processes a single command in 0.001ms, but a network round-trip might take 10ms. If you send commands one by one, it’s like driving a Ferrari but hitting a red light every 100 meters — the car’s speed is irrelevant.
Pipeline’s solution is simple: bundle 1000 commands into one send. At 10ms per round-trip, that turns roughly 10 seconds of back-and-forth into a single round-trip of about 10ms.
Pipeline: Like shopping at a supermarket — you don’t check out after every item. You put everything in your basket and check out once. Pipeline lets you send multiple Redis commands at once, reducing network round-trips.
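The arithmetic behind that claim, as a runnable sketch. The 10ms round-trip and 0.001ms execution figures are the illustrative numbers from above; with node-redis you'd get the batched behavior via `client.multi()...exec()`.

```javascript
// Why pipelining wins: per-command cost is dominated by the network
// round-trip (RTT), not by Redis executing the command.
const RTT_MS = 10;     // one network round-trip (illustrative)
const EXEC_MS = 0.001; // Redis executing a single command (illustrative)

// One round-trip per command: 1000 commands pay the RTT 1000 times.
function costWithoutPipeline(n) {
  return n * (RTT_MS + EXEC_MS);
}

// One round-trip total: all commands ride in a single batch.
function costWithPipeline(n) {
  return RTT_MS + n * EXEC_MS;
}

console.log(Math.round(costWithoutPipeline(1000))); // 10001 — about ten seconds
console.log(Math.round(costWithPipeline(1000)));    // 11 — about ten milliseconds
```

Notice that Redis's own work (1ms total for all 1000 commands) is a rounding error either way — the entire difference is how many times you cross the network.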
Clawd highlights:
RTT (Round-Trip Time) is the real Final Boss. Redis itself is absurdly fast, but your network is dragging it down. Not using Pipeline is like driving a Ferrari on a congested city road — 800 horsepower engine, 15 km/h speed. Turn on Pipeline? Congrats, you just hit the highway (๑•̀ㅂ•́)و✧
When NOT to Use Redis?
Redis is powerful, but it’s not a silver bullet. Complex relational queries that need JOINs — stick with SQL. Data larger than your memory (100GB data, 16GB RAM) — don’t force it. Bank transaction records that need absolute data safety — use dedicated databases. Full-text search — though RediSearch exists, Elasticsearch is still the specialist.
So, How Are You Going to Drive That Ferrari?
Back to where we started. Redis is a Ferrari — you can absolutely drive it to buy groceries (use it as a cache), and it’ll be fast, no complaints. But now you know: it can race in F1 (real-time counters), drift through corners (distributed locking), and dominate the drag strip (Pipeline).
Next time you open a Redis connection, don’t just GET and SET. Try Sorted Sets for a leaderboard. Try Pub/Sub for real-time notifications. You’ll find that the Ferrari’s engine sounds completely different when it’s actually on the track.