Most developers hate AI coding tools. I love them. Here's why we're talking past each other - and what you're missing if you care about actually shipping products.
I've been watching the AI for coding debate with fascination. There's this massive divide in the developer community that nobody talks about directly. On one side, you have craft-focused developers who see AI code generators as removing the satisfying parts of programming. On the other, delivery-focused developers like myself who see AI coding tools as removing tedious obstacles to shipping products.
The reality? I'd be happy for all my coding to be done by AI. I prefer delivery over the craft of writing software. This isn't about being lazy - it's about being pragmatic in a business environment where perfect is the enemy of done.
The Great Developer Divide Nobody Discusses
The debate over the best AI coding tools completely misses this fundamental split. Most content assumes everyone values the coding process equally, but that's not reality. The data backs this up: while 76% of developers are using or planning to use AI coding assistants, only 43% trust their accuracy - a telling gap between adoption and confidence.
In my experience leading technology teams, developers fall into two camps:
Craft-focused developers enjoy the process of writing code. They see programming as creative expression and worry that AI code writers remove the intellectually satisfying parts. For them, GitHub Copilot or Cursor feels like having someone else solve crossword puzzles for you.
Delivery-focused developers care about shipping products. We learned programming to build things, not to enjoy syntax. AI coding assistants aren't removing joy - they're removing friction between ideas and working software.
This isn't about skill level. I've built complex caching systems, distributed testing frameworks, and managed technology at Visa. But I'd rather spend time on architecture decisions and business problems than writing HTTP handlers for the thousandth time.
The business reality backs this up. Companies don't hire engineers to enjoy coding - they hire us to deliver value. Everything else is a bonus. When AI code generators can write better boilerplate faster than humans, fighting them is fighting reality. The numbers prove it: enterprise spending on generative AI applications exploded from $600 million to $4.6 billion in 2024 - an 8x increase that shows we've moved beyond experimentation.
The Current AI Coding Tools Landscape
Before diving into implementation, let's look at what's actually available. The AI code assistant market, valued at $5.5 billion in 2024, is projected to reach $47.3 billion by 2034. This isn't hype - it's mainstream adoption driven by measurable productivity gains.
GitHub Copilot dominates with over 1.5 million active users, generating up to 46% of code in enabled files. They recently launched a free tier with 2,000 completions monthly, making it accessible for experimentation. Cursor has emerged as the serious alternative, with its valuation jumping from $400 million to $2.6 billion in just four months. Their Pro plan at $20/month offers 500 premium requests with access to multiple AI models.
But the real innovation is in autonomous coding agents. Windsurf's "Cascade" AI handles multi-file editing, while Replit Agent builds entire applications from natural language descriptions. Claude Code, which I use primarily, works more like hiring a junior developer than using autocomplete - it searches, edits, tests, and even pushes code to GitHub independently.
The pricing evolution tells the story. We've moved from simple subscriptions to usage-based billing that reflects different computational costs. Cursor's token-based pricing and unified request model show the market maturing beyond one-size-fits-all approaches.
My 100% AI Coding Workflow (And Why It Works)
Most AI for coding content assumes a hybrid approach. That's leaving productivity on the table. I've moved to 100% AI code generation using Claude Code, and the results speak for themselves: projects that used to take 1-2 weeks now regularly finish in under a day.
Here's my actual workflow:
Voice-first development: I use SuperWhisper for voice-to-text, then speak directly to Claude Code. No IDE for initial development - just conversation. "I want to add an API endpoint that handles user authentication with JWT tokens and rate limiting." This represents a fundamental shift in how AI has transformed my daily workflow from traditional coding to AI management.
Sub-agent specialisation: Claude Code's new sub-agents feature lets me create specialised workflows. I have an onboarding sub-agent for new features, a security review sub-agent, and architecture planning sub-agents. Each handles specific contexts without cross-contamination. I've documented this approach in detail in my article on managing multiple Claude Code sessions, where I built tools like ccswitch to handle the complexity.
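For a concrete picture, Claude Code sub-agents are defined as Markdown files with YAML frontmatter under `.claude/agents/` in a project. The sketch below shows roughly what my security review sub-agent looks like - the name, description, and prompt wording here are illustrative rather than my exact files, and you should check the current Claude Code docs for the frontmatter fields your version supports:

```markdown
---
name: security-reviewer
description: Reviews recently changed code for security issues. Use after a feature is implemented.
tools: Read, Grep, Glob
---

You are a security reviewer. Examine the current feature's changes for
injection risks, hardcoded secrets, missing input validation, and unsafe
deserialisation. Report findings with file paths and line references,
ordered by severity. Do not modify any files.
```

Keeping the tool list read-only (`Read`, `Grep`, `Glob`) is deliberate: a review sub-agent that cannot edit files can't contaminate the main session's work.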
AI-generated complexity: I recently built a course generation platform - front-end, back-end, WebSocket integration, queue mechanisms for checkpoint recovery, cost calculations - in about 5 hours of actual work time. This would have been 30-40 hours manually. The AI handled complex state management, database optimisation, and even deployment configuration.
Context management: The key insight is that manual context management matters less as AI coding tools improve. Claude Code is excellent at searching for and finding relevant context within a feature or workstream. I don't need to gather files by hand - I just describe what I want and let it figure out the implementation details.
The process looks like this:
- Describe the feature requirements in natural language
- Ask Claude to explain its implementation approach
- Review and choose from suggested options
- Let it implement while I review code in my IDE
- Run security and logic review with sub-agents
- Move to new session for next feature
The Productivity Reality vs Marketing Claims
Let me be specific about productivity gains because most content here is too vague. MIT, Princeton, and Microsoft's comprehensive study of 4,867 developers revealed a 26% increase in completed tasks and 13.5% boost in weekly code commits when using AI tools. But here's what that actually means in practice.
I've released three open source projects with timelines at least 5x faster, most times 10x. That GitPilotAI tool I built? What would have been many days of development happened in hours.
But here's what most "best AI tool for coding" articles don't tell you: small changes actually take longer with AI. There's overhead in context switching, explaining requirements, and reviewing output. The productivity gains come from handling complex, multi-file changes that would traditionally require hours of manual coordination.
The learning curve is also real. It takes 1-2 weeks of struggling to understand how AI coding assistants fit into your workflow. It's not "pick this up and it does everything for me." You need to learn prompt engineering, some context management, and review processes. Most developers aren't willing to invest this time, so they conclude the tools aren't worth it.
Performance varies dramatically by experience level. Junior developers see 21-40% productivity gains, while senior developers experience more modest 7-16% improvements. This suggests AI programmer tools excel at accelerating routine tasks but provide diminishing returns for complex architectural decisions.
Security in AI-Generated Code (Lessons from Production)
The security concerns around AI code generators are valid and backed by sobering data. Research shows 40% of AI-generated code contains vulnerabilities, with Python snippets showing a 29.5% weakness rate and JavaScript at 24.2%. GitHub repositories with Copilot enabled leak secrets at a rate of 6.4%, compared to 4.6% for standard repositories.
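To make that concrete: the weaknesses behind those numbers are mostly mundane ones, like string-built SQL queries and hardcoded secrets, which is exactly what an assistant tends to generate when the prompt never mentions security. Here's a minimal sketch of the pattern to reject in review, and its fix, using Python's built-in `sqlite3` (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The pattern AI assistants often emit: user input interpolated into SQL.
    # A name like "' OR '1'='1" matches every row - classic SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # The fix: a parameterised query; the driver escapes the value.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # returns []
```

This is exactly the kind of check my security review sub-agent runs on every feature: it can't catch novel vulnerabilities, but it reliably flags the boring, high-frequency ones the research describes.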
