Before we dive in: I share practical insights like this weekly. Join developers and founders getting my newsletter with real solutions to engineering and business challenges.
I've been using AI coding tools since GitHub Copilot launched. Started with Copilot when it was basically the only game in town - a plugin that did autocomplete reasonably well. Then Cursor came out and absolutely dominated. They were leagues ahead of everyone else in the AI coding space, and for months they were the obvious choice if you wanted to be productive with AI.
Then Claude Code launched, and I switched my entire workflow over. Not just parts of it - everything. Most people I know use multiple tools depending on the task, but I went all-in on Claude Code because it fundamentally changed how I thought about AI-assisted development. It wasn't autocomplete anymore, it was more like having a junior developer who could actually understand context and make meaningful changes.
But here's the thing about Cursor that always bothered me: it's built on VS Code. And VS Code has always felt unreasonably heavy to me. Opening it takes time. Traversing files has this noticeable lag. The whole experience feels sluggish in a way that makes me want to reach for something lighter.
I use Zed as my main editor now precisely because it doesn't have that weight. Zed was built from scratch for speed - written in Rust, optimised for performance, and you can feel the difference in every interaction. VS Code is TypeScript running on Electron, and no amount of optimisation can fully hide that. Even JetBrains IDEs feel faster than VS Code, which is really saying something.
So when Cursor announced their CLI tool a couple of weeks ago, I immediately gave it a shot. No IDE, no VS Code overhead, no dealing with the weight of a full editor - just a CLI tool that does what Claude Code does. And after using it extensively for the past couple of weeks while building AI Code Metrics and working on my meditation app, I've found myself reaching for Cursor first most of the time.
The Performance Problem That's Got Worse
Claude Code has become genuinely slow lately. I don't know what's changed in recent updates, but it's noticeable enough that I started timing things.
Startup time used to be maybe a second, tops. Now it regularly takes 5+ seconds just to get going. That's before you've done anything - just opening the tool and getting to a state where you can actually start working.
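If you want to see it for yourself, a crude check is timing how long each CLI takes just to boot and exit. This assumes both binaries are on your PATH and support a --version flag; it won't capture full interactive startup, but the baseline gap still shows:

```bash
# Crude startup comparison. --version only has to boot the tool and exit,
# so this understates the full interactive startup time, but the gap shows.
time claude --version
time cursor-agent --version
```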
But the real problem isn't startup. It's during actual usage. You're in the middle of a generation, you realise it's going down the wrong path, so you press escape to stop it. Nothing happens. You press it again. Still nothing. Sometimes the screen does this crazy flashing thing. You wait 5-10 seconds, try to type, and your input doesn't register.
It's this weird state where the tool is clearly doing something but it's not responding to you anymore. You're just stuck waiting, wondering if it's going to recover or if you need to kill the process and start over.
This has got worse over recent updates. I don't know if it's the model, the tool itself, or something about how they're handling context, but whatever it is, it's become a real friction point in my workflow.
Cursor CLI, by contrast, feels like how Claude Code used to feel. Quick startup. Responsive during generation. When you stop a generation, it actually stops. These are simple things that shouldn't be remarkable, but when you're comparing them directly, the difference is obvious.
I gave it a practical test early on: I asked it to understand what was happening across a couple of Go packages and replay its understanding back to me. This is something I do regularly when working with unfamiliar code - get the AI to explain the data flow and logic before making changes.
With Claude Code, this would normally take maybe two minutes. With Cursor CLI running Grok, it took twenty-something seconds.
That's not a small difference when you're iterating. It's the difference between staying in flow and getting pulled out of it while you wait for responses.
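For reference, the prompt looked roughly like this. The package paths are made up for illustration, and I'm assuming the non-interactive -p/--print flag that cursor-agent exposes:

```bash
# Ask the agent to replay its understanding before it touches anything.
# internal/ingest and internal/store are illustrative package paths.
cursor-agent -p "Read internal/ingest and internal/store, then explain the data flow between them - what comes in, how it's transformed, where it ends up - before suggesting any changes."
```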
Why I Actually Wanted to Try Cursor Again
The main reason I wanted to give Cursor another shot wasn't just the CLI announcement - it was Grok.
I'd heard Grok's code model was fast and decent, and the pricing is extremely cheap compared to Claude. So when Cursor CLI launched, I saw it as an opportunity to try Grok in a proper coding environment without dealing with the IDE overhead.
Turns out Grok Code Fast is actually very good.
I've been using it almost exclusively for the past couple of weeks. Debugging specific functions, understanding system architecture, looking at performance issues, generating new features - it handles all of this well. Not perfectly, but well enough that the speed and cost trade-off makes sense for probably 80-90% of what I do.
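Picking the model is a one-flag affair. I'm assuming here that your cursor-agent build takes --model and a Grok model id - check cursor-agent --help for the exact names your version lists:

```bash
# Launch a session on Grok. The model id may differ by version;
# `cursor-agent --help` lists what's available. worker.go is illustrative.
cursor-agent --model grok "Why does the retry loop in worker.go hammer the API?"
```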
Does it fail sometimes? Absolutely. There are tasks where it struggles and I need to switch to Claude. Complex architectural decisions, subtle concurrency patterns in Go, anything where the context is spread across multiple systems in non-obvious ways - Grok can get lost on these.
But those situations are rare enough that I'm willing to deal with them. For the bulk of development work - implementing features, fixing bugs, refactoring code, writing tests - Grok is more than capable.
The pricing matters too. I'm on Cursor's $70/month plan now (one tier up from the base Pro), and even with hours of daily usage over many days, I didn't come close to hitting any limits. With Claude API rates through Claude Code, I'd be spending considerably more for the same level of usage.
This isn't about being cheap. It's about removing friction. When you're not worried about burning through tokens, you ask more questions, iterate more freely, experiment with different approaches. That freedom is worth something.
How My Workflow Has Changed (And Hasn't)
The actual workflow between Cursor CLI and Claude Code is basically identical. There's no learning curve, no adaptation period, no new patterns to memorise. You describe what you want, it generates code, you review and iterate.
But I have noticed one shift in how I interact with Grok compared to Claude: I'm more explicit with context upfront.
With Claude Code, I might say "I think it's in this file, start here and follow the flow, probably goes into these other files" and let it figure things out. Claude is good at searching and finding relevant context, so I can be relatively vague and trust it to piece things together.
With Cursor and Grok, I'm more directive: "start at these files and work from there." I point it more explicitly at the right places rather than letting it discover them.
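Concretely, the difference looks something like this - the file and function names are hypothetical:

```bash
# Vague - fine with Claude Code, which is good at discovering context:
claude -p "I think the bug is somewhere in the billing flow. Start at the invoice handler and follow it."

# Directive - how I prompt Grok through Cursor CLI:
cursor-agent -p "Start at internal/billing/invoice.go and internal/billing/tax.go. Totals are wrong when a discount and tax apply together - trace calculateTotal and find where the ordering breaks."
```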
I don't think this is a limitation of Grok so much as it's me optimising for speed. Being more explicit means it gets to the answer faster. And since Grok is genuinely fast - both the model itself and the tool's responsiveness - being verbose doesn't slow things down the way it might with other tools.
It's a small adjustment, but it's made my interactions more efficient overall. I spend less time waiting for the AI to figure out what I mean and more time reviewing the actual implementation.
The One Genuinely Annoying Limitation
There's one thing about Cursor that drives me crazy: you can't switch models mid-conversation.
If you're working through a problem and Grok isn't cutting it, you need to start a completely fresh conversation with Claude. You lose the context, the thread of what you were working on, all the back-and-forth that led to the current state.
This isn't a massive problem - it doesn't happen constantly - but when it does, it's frustrating. You're stuck on something for a few iterations, you want to try a different approach with a more capable model, and the tool forces you to start from scratch.
You end up copying and pasting context from the previous conversation, trying to reconstruct the mental model you'd built up, explaining things the new model should already know if it could just see the conversation history.
I don't understand why this isn't implemented. Keep the conversation history, swap the backend model, continue where you left off. It seems conceptually straightforward, and it would make the multi-model approach significantly more practical.
Maybe there's a technical reason I'm not seeing. Maybe different models need different context formats. But from a user perspective, it feels like an obvious feature that should exist.
Would I Switch Back If Claude Code Got Faster?
This is the question I keep asking myself: if Claude Code suddenly became as fast as Cursor tomorrow, would I switch back?
Probably not entirely.
The latest Claude Sonnet is still the best model. For complex problems, architectural decisions, anything where you need that extra bit of sophistication - Claude is the better choice.
But even if the CLI tool itself matched Cursor's speed, the Sonnet model wouldn't be as fast as Grok. And Grok is good enough that the speed/cost trade-off makes sense for everyday work.
I'd likely end up using both: Grok through Cursor for iteration and quick tasks, Claude for complex problems that need more capability. Which is kind of annoying from a workflow perspective - more tools to manage, more context to keep track of - but it's probably the pragmatic approach.
The reality is that for someone who codes for hours every day, responsiveness matters more than I'd realised. Waiting 10 seconds here and there adds up. It breaks flow. It makes you less likely to ask follow-up questions, less likely to iterate quickly, less likely to explore alternative approaches.
You start optimising for fewer interactions rather than better solutions, which is exactly the wrong trade-off when you're trying to build something well.
The Broader Pattern I'm Seeing
What's interesting about this whole experience is how much it mirrors what I wrote about in my AI coding workflow article. The tools are evolving fast enough that what made sense six months ago might not be the best approach today.
When I initially moved to 100% AI-assisted development, Claude Code was the obvious choice. It was the most capable tool, the most reliable, the best at understanding context. The performance was good enough that it wasn't a bottleneck.
But as the tool has got heavier and the alternatives have got better, that calculation has shifted. Performance now matters enough to change which tool I reach for first, even if it means using a slightly less capable model.
This is particularly relevant when you're working on something like the vibe coding problem - where the real skill isn't writing code, it's directing AI effectively and reviewing the output. If you're spending most of your time reviewing rather than generating, having a faster generation step is a significant win.
The pattern I'm seeing across AI coding tools is this bifurcation: general-purpose tools that try to be everything to everyone (Claude Code, GitHub Copilot), and specialised tools that optimise for specific workflows (Cursor CLI, various other CLI-first tools).
For a while, the general-purpose tools were good enough. But as these tools mature, the specialised ones are starting to make more sense for people who know what they want.
What This Means for Your Workflow
If you're using Claude Code and finding it sluggish lately, I'd recommend giving Cursor CLI a shot. Setup takes minutes, it doesn't require VS Code to be open, and if you're willing to experiment with Grok, you'll likely find it handles most of what you need.
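Setup really is minimal. At the time of writing the install is a one-liner from cursor.com - verify against their docs before piping anything to bash:

```bash
# Install the CLI, then run it from your project root.
curl https://cursor.com/install -fsS | bash
cd your-project && cursor-agent
```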
The workflow is basically identical - there's no lock-in, no switching cost, no new patterns to learn. Just faster responses and cheaper usage for everyday tasks.
And if you need Claude's sophistication for complex problems, you can still reach for it. This isn't about replacing Claude Code entirely - it's about having options depending on what you're working on.
For me, that means using Cursor for probably 80% of my work now. The things where I need Claude's extra capability are real, but they're not as frequent as I thought they'd be when I first made the switch.
The key insight from all of this: don't get too attached to any single tool. The AI coding space is moving fast enough that what works best today might not be what works best in six months. Stay flexible, experiment with new approaches, and optimise for what actually makes you more productive rather than what you think should make you more productive.
I wrote about managing multiple Claude Code sessions when I was all-in on that tool. Now I'm building different workflows around Cursor. In another six months, it might be something else entirely. That's fine - the point is shipping good code quickly, not loyalty to any particular tool.
Cursor CLI with Grok has become my default. Fast enough to stay in flow, capable enough for most tasks, cheap enough that I don't think about usage. That combination is hard to beat right now.