I've been using AI coding tools since GitHub Copilot launched. Started with Copilot when it was basically the only game in town - a plugin that did autocomplete reasonably well. Then Cursor came out and absolutely dominated. They were leagues ahead of everyone else in the AI coding space, and for months they were the obvious choice if you wanted to be productive with AI.
Then Claude Code launched, and I switched my entire workflow over. Not just parts of it - everything. Most people I know use multiple tools depending on the task, but I went all-in on Claude Code because it fundamentally changed how I thought about AI-assisted development. It wasn't autocomplete anymore; it was more like having a junior developer who could actually understand context and make meaningful changes.
But here's the thing about Cursor that always bothered me: it's built on VS Code. And VS Code has always felt unreasonably heavy to me. Opening it takes time. Navigating between files has a noticeable lag. The whole experience feels sluggish in a way that makes me want to reach for something lighter.
I use Zed as my main editor now precisely because it doesn't have that weight. Zed was written from scratch in Rust and optimised for performance, and you can feel the difference in every interaction. VS Code is built on Electron - TypeScript running inside a browser shell - and no amount of optimisation can hide that fundamental reality. Even JetBrains IDEs feel faster than VS Code, which is really saying something.
So when Cursor announced their CLI tool a couple of weeks ago, I immediately gave it a shot. No IDE, no VS Code overhead, no dealing with the weight of a full editor - just a CLI tool that does what Claude Code does. And after using it extensively for the past couple of weeks while building AI Code Metrics and working on my meditation app, I've found myself reaching for Cursor first most of the time.
The Performance Problem That's Got Worse
Claude Code has become genuinely slow lately. I don't know what's changed in recent updates, but it's noticeable enough that I started timing things.
Startup time used to be maybe a second, tops. Now it regularly takes 5+ seconds just to get going. That's before you've done anything - just opening the tool and getting to a state where you can actually start working.
But the real problem isn't startup. It's during actual usage. You're in the middle of a generation, you realise it's going down the wrong path, so you press escape to stop it. Nothing happens. You press it again. Still nothing. Sometimes the screen does this crazy flashing thing. You wait 5-10 seconds, try to type, and your input doesn't register.
It's this weird state where the tool is clearly doing something but it's not responding to you anymore. You're just stuck waiting, wondering if it's going to recover or if you need to kill the process and start over.
This has got worse over recent updates. I don't know if it's the model, the tool itself, or something about how they're handling context, but whatever it is, it's become a real point of friction in my workflow.
Cursor CLI, by contrast, feels like how Claude Code used to feel. Quick startup. Responsive during generation. When you stop a generation, it actually stops. These are simple things that shouldn't be remarkable, but when you're comparing them directly, the difference is obvious.
I gave it a practical test early on: I asked it to understand what was happening across a couple of Go packages and replay its understanding back to me. This is something I do regularly when working with unfamiliar code - get the AI to explain the data flow and logic before making changes.
With Claude Code, this would normally take maybe two minutes. With Cursor CLI using Grok, it took twenty-something seconds.
That's not a small difference when you're iterating. It's the difference between staying in flow and getting pulled out of it while you wait for responses.
