I've been on Claude Max 20x for a while now, paying $200/month, and at some point I realised I had no real sense of whether it was actually good value. I was using Claude Code heavily across several projects, clearly burning through a lot of tokens, but whether I'd have spent more or less on pay-as-you-go API access was completely opaque to me. Anthropic's console gives you high-level API usage figures but nothing per-project, nothing per-session, and nothing that tells you where the money actually went within a given piece of work.
The data is all there though. Claude Code writes detailed JSONL logs to ~/.claude/projects/ after every API call - every request, every token count, every model used, timestamped and structured. It just sits there, completely invisible unless you go digging through flat files manually. So I spent a weekend building cctrack, a single Go binary that parses those logs and surfaces everything in a real-time dashboard. I'll share the full code and GitHub repo later in the post, but first - the answer to my original question, because the numbers are pretty stark.

What the numbers actually showed
This month my Claude Code usage across all projects has cost $1,428.62 at API pay-as-you-go rates, across 755.7 million tokens, with the month projected to finish at $1,538.51. I'm paying $200/month for Max 20x. That's roughly seven times the subscription cost in equivalent API spend, which settles the "is this worth it" question pretty definitively. And that's Claude Code alone - Max also covers claude.ai usage, so the real value gap is even wider than that number suggests.
The cost breakdown was the first revelation. Cache Read accounts for $974.93 - 63% of spend. Cache Write is $542.32 (34%). Actual output tokens, the ones generating the responses I'm reading, are $64.75 (4%). Input is essentially nothing at $3.52. The overwhelming majority of spend goes to caching infrastructure, not generation - and it's completely invisible without something surfacing it.
The model split gave me the most pause. Opus accounts for 90% of spend - $1,278.38 across 74 sessions. Haiku is $139.78 across 36 sessions, Sonnet just $2.39 across 24 sessions. cctrack's savings calculator is fairly blunt about the implication: switching my Opus sessions to Sonnet could save approximately $1,022.70 per month at API rates. I've been defaulting to Opus without much thought for most sessions, and while it genuinely is better for the complex architectural work I do, seeing that figure made me think more carefully about which sessions actually need it versus which ones I'm running on Opus out of habit. I've written about Claude Code model selection and pricing before, and this data gave me a much more grounded view on those tradeoffs than anything I had previously.
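The savings figure falls out of a simple projection. This sketch shows the implied calculation, not cctrack's actual code, and it assumes Sonnet's per-token rates are a uniform fraction of Opus's across all token categories (roughly 1:5 at the time of writing - verify against current pricing):

```go
package main

import "fmt"

// estimateSavings projects the saving from moving Opus sessions to a
// cheaper model, assuming the cheaper model's rates are a uniform
// fraction of Opus's across all token categories. This is a sketch of
// the implied arithmetic, not cctrack's actual implementation.
func estimateSavings(opusSpend, priceRatio float64) float64 {
	return opusSpend * (1 - priceRatio)
}

func main() {
	// My $1,278.38 Opus spend at an assumed 1:5 Sonnet:Opus price ratio.
	fmt.Printf("$%.2f\n", estimateSavings(1278.38, 0.2)) // $1022.70
}
```

The uniform-ratio assumption is a simplification - in reality the output quality and token usage patterns would shift too - but it's a useful upper bound on what habit-driven Opus usage is costing.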
The rest of this post covers how cctrack is built in detail - the incremental JSONL parsing system, the SQLite schema, the embedded Vue SPA, the WebSocket real-time updates, and the cost-weighted token breakdown that makes the donut chart actually useful. Full source code and GitHub repo at the end.
