AI for Coding: Why Most Developers Are Getting It Wrong (And How to Get It Right)

Most developers hate AI coding tools. I love them. Here's why we're talking past each other - and what you're missing if you care about actually shipping products.

I've been watching the AI for coding debate with fascination. There's this massive divide in the developer community that nobody talks about directly. On one side, you have craft-focused developers who see AI code generators as removing the satisfying parts of programming. On the other, delivery-focused developers like myself who see AI coding tools as removing tedious obstacles to shipping products.

The reality? I'm happy for all of my coding to be AI-generated. I prefer delivery over the craft of writing software. This isn't about being lazy - it's about being pragmatic in a business environment where perfect is the enemy of done.

The Great Developer Divide Nobody Discusses

The debate over the best AI coding tools completely misses this fundamental split. Most content assumes everyone values the coding process equally, but that's not reality. The data backs this up: while 76% of developers are using or planning to use AI coding assistants, only 43% trust their accuracy - a telling gap between adoption and confidence.

In my experience leading technology teams, developers fall into two camps:

Craft-focused developers enjoy the process of writing code. They see programming as creative expression and worry that AI code writers remove the intellectually satisfying parts. For them, GitHub Copilot or Cursor feels like having someone else solve crossword puzzles for you.

Delivery-focused developers care about shipping products. We learned programming to build things, not to enjoy syntax. AI coding assistants aren't removing joy - they're removing friction between ideas and working software.

This isn't about skill level. I've built complex caching systems and distributed testing frameworks, and managed technology at Visa. But I'd rather spend time on architecture decisions and business problems than writing HTTP handlers for the thousandth time.

The business reality backs this up. Companies don't hire engineers to enjoy coding - they hire us to deliver value. Everything else is a bonus. When AI code generators can write better boilerplate faster than humans, fighting them is fighting reality. The numbers prove it: enterprise spending on generative AI applications exploded from $600 million to $4.6 billion in 2024 - an 8x increase that shows we've moved beyond experimentation.

The Current AI Coding Tools Landscape

Before diving into implementation, let's look at what's actually available. The AI code assistant market, valued at $5.5 billion in 2024, is projected to reach $47.3 billion by 2034. This isn't hype - it's mainstream adoption driven by measurable productivity gains.

GitHub Copilot dominates with over 1.5 million active users, generating up to 46% of code in enabled files. They recently launched a free tier with 2,000 completions monthly, making it accessible for experimentation. Cursor has emerged as the serious alternative, with its valuation jumping from $400 million to $2.6 billion in just four months. Their Pro plan at $20/month offers 500 premium requests with access to multiple AI models.

But the real innovation is in autonomous coding agents. Windsurf's "Cascade" AI handles multi-file editing, while Replit Agent builds entire applications from natural language descriptions. Claude Code, which I use primarily, works more like hiring a junior developer than using autocomplete - it searches, edits, tests, and even pushes code to GitHub independently.

The pricing evolution tells the story. We've moved from simple subscriptions to usage-based billing reflecting different computational costs. Cursor's token-based pricing and unified request model show the market maturing beyond one-size-fits-all approaches.

My 100% AI Coding Workflow (And Why It Works)

Most AI for coding content assumes a hybrid approach. That's leaving productivity on the table. I've moved to 100% AI code generation using Claude Code, and the results speak for themselves: projects that used to take 1-2 weeks now regularly finish in under a day.

Here's my actual workflow:

Voice-first development: I use SuperWhisper for voice-to-text, then speak directly to Claude Code. No IDE for initial development - just conversation. "I want to add an API endpoint that handles user authentication with JWT tokens and rate limiting." This represents a fundamental shift in how AI has transformed my daily workflow from traditional coding to AI management.
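To make that concrete, here's roughly what comes back from a prompt like that - a minimal sketch, assuming golang-jwt/jwt and golang.org/x/time/rate as the libraries; the route, names, and limiter strategy are illustrative, not my production code:

```go
package main

import (
	"net/http"
	"os"
	"strings"

	"github.com/golang-jwt/jwt/v5"
	"golang.org/x/time/rate"
)

// One global limiter (5 req/s, burst 10) keeps the sketch short; a real
// service would key limiters per client.
var limiter = rate.NewLimiter(rate.Limit(5), 10)

// authAndLimit wraps a handler with rate limiting and JWT validation.
func authAndLimit(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		token, err := jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
			return []byte(os.Getenv("JWT_SECRET")), nil // HMAC secret from env
		}, jwt.WithValidMethods([]string{"HS256"}))
		if err != nil || !token.Valid {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next(w, r)
	}
}

func main() {
	http.HandleFunc("/api/profile", authAndLimit(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"status":"ok"}`))
	}))
	http.ListenAndServe(":8080", nil)
}
```

My job shifts from typing this out to reviewing it: the token validation path, the limiter granularity, and the error responses.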

Sub-agent specialisation: Claude Code's new sub-agents feature lets me create specialised workflows. I have an onboarding sub-agent for new features, a security review sub-agent, and architecture planning sub-agents. Each handles specific contexts without cross-contamination. I've documented this approach in detail in my article on managing multiple Claude Code sessions, where I built tools like ccswitch to handle the complexity.

AI-generated complexity: I recently built a course generation platform - front-end, back-end, WebSocket integration, queue mechanisms for checkpoint recovery, cost calculations - in about 5 hours of actual work time. This would have been 30-40 hours manually. The AI handled complex state management, database optimization, and even deployment configuration.

Context management: The key insight is that manual context gathering matters less as AI coding tools improve. Claude Code is excellent at searching out the relevant context within a feature or workstream. I don't need to hand-pick files - I just describe what I want and let it figure out the implementation details.

The process looks like this:

  1. Describe the feature requirements in natural language
  2. Ask Claude to explain its implementation approach
  3. Review and choose from suggested options
  4. Let it implement while I review code in my IDE
  5. Run security and logic review with sub-agents
  6. Move to new session for next feature

The Productivity Reality vs Marketing Claims

Let me be specific about productivity gains because most content here is too vague. MIT, Princeton, and Microsoft's comprehensive study of 4,867 developers revealed a 26% increase in completed tasks and 13.5% boost in weekly code commits when using AI tools. But here's what that actually means in practice.

I've released three open source projects with timelines at least 5x faster, most times 10x. That GitPilotAI tool I built? What would have been many days of development happened in hours.

But here's what articles on the best AI tools for coding don't tell you: small changes actually take longer with AI. There's overhead in context switching, explaining requirements, and reviewing output. The productivity gains come from complex, multi-file changes that would traditionally require hours of manual coordination.

The learning curve is also real. It takes 1-2 weeks of struggling to understand how AI coding assistants work with your workflow. It's not "pick this up and it does everything for me." You need to learn prompt engineering, some context management, and review processes. Most developers aren't willing to invest this time, so they conclude the AI code helper tools aren't worth it.

Performance varies dramatically by experience level. Junior developers see 21-40% productivity gains, while senior developers experience more modest 7-16% improvements. This suggests AI programmer tools excel at accelerating routine tasks but provide diminishing returns for complex architectural decisions.

Security in AI-Generated Code (Lessons from Production)

The security concerns around AI code generators are valid and backed by sobering data. Research shows 40% of AI-generated code contains vulnerabilities, with Python snippets showing a 29.5% weakness rate and JavaScript at 24.2%. GitHub repositories with Copilot enabled leak secrets at a rate of 6.4%, compared to 4.6% for standard repositories.
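To make the secrets problem concrete, here's the anti-pattern in Go form - a hypothetical sketch, not code from a real incident - next to the fix a review process should enforce:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// Anti-pattern AI tools sometimes produce: a credential baked into source,
// where it gets committed and eventually leaked.
// const apiKey = "sk_live_..." // never do this

// The pattern to enforce instead: read secrets from the environment and
// fail fast when they're missing. The variable name here is hypothetical.
func mustAPIKey() string {
	key := os.Getenv("PAYMENTS_API_KEY")
	if key == "" {
		log.Fatal("PAYMENTS_API_KEY is not set")
	}
	return key
}

func main() {
	fmt.Println("loaded key of length", len(mustAPIKey()))
}
```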

I learned this the hard way when an AI-generated Ansible script locked me out of root access because I asked it to "make it super secure" without reviewing the implementation (novice mistake). This experience taught me the importance of systematic review processes, which led me to build Cara, my AI-powered code review companion specifically to catch these types of issues.

Even more concerning: up to 30% of packages suggested by AI tools are hallucinated - they don't exist, creating opportunities for attackers to register these names with malicious code. This highlights why AI adoption in regulated industries requires careful consideration of security frameworks.

Here's my current security framework:

Mandatory review processes: Every AI-generated change gets reviewed by both me and a security sub-agent. I ask Claude to explain what it's implemented, then identify potential holes or fixes. Finally, I ask it to review its own work.

Context-aware security: For fintech work, I use Claude desktop to discuss complex financial logic implementation, then transfer that understanding to Claude Code for actual development. This separation helps catch business logic errors that pure code review might miss. I've enhanced this workflow by building memory for Claude Desktop to maintain context across sessions.

Incremental deployment: My CI/CD approach includes automated testing of AI-generated code before deployment. The AI code writer might be fast, but its output still needs proper integration testing.

Enterprise compliance considerations: For regulated industries, SOC2 Type II compliance has become baseline. The EU AI Act introduces penalties up to €35 million or 7% of global revenue for violations. GitHub Copilot Business and Enterprise now include content exclusion and policy enforcement specifically for these environments.

The key is being as specific as possible with constraints and requirements. Vague prompts lead to generic solutions that miss edge cases.

Team Reactions and the Skills Gap

Developer resistance to AI coding tools is massive and predictable. Most engineers think 100% AI code generation produces bad code and represents laziness. Even when I demonstrate 5-10x productivity improvements, the response is often that it's a "skill issue" or that the AI "doesn't give the right context."

A few engineers experiment and reach 70-80% AI assistance with good results. But most use GitHub Copilot or Cursor as an infrequent peer review tool, affecting maybe 10% of their code. This is gross underutilization driven by workflow resistance, not tool limitations.

The Stack Overflow 2024 Developer Survey reveals this divide clearly: 70% of developers don't fear AI taking their jobs, but adoption patterns show most aren't maximizing these tools. Context-heavy tasks remain challenging, with 65% of developers reporting AI misses critical context during refactoring and 60% experiencing issues during testing and code review.

The interesting pattern: non-technical stakeholders are extremely interested in AI coding productivity gains. There's a massive gap between understanding how something gets done and actually getting it done. You still need to be a programmer to direct AI effectively.

From a hiring perspective, this creates clear advantages. Everyone's looking for engineers who use AI coding assistants because those who don't are at an explicit disadvantage. It's become essential for career progression. Major enterprises report substantial gains: OCBC Bank achieved a 35% productivity boost and a 30% increase in developer efficiency using AI coding tools.

Go Development with AI Code Generators

Working primarily in Go, I've found AI coding tools handle the language particularly well. GitHub Copilot explicitly lists Go among languages it "works especially well" with, and 70% of Go developers now use AI assistants, primarily for boilerplate generation and standard library usage.

AI excels at the following (a representative sketch follows the list):

  • Generating HTTP handlers and middleware
  • Database integration with proper error handling
  • Standard concurrency patterns with goroutines
  • JSON marshaling and API responses
  • CLI applications using cobra
  • Error handling patterns following Go idioms
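Here's the sketch: a hypothetical endpoint with made-up types, but the idiomatic decode-validate-respond error handling that AI tools produce reliably:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// createUserRequest is a hypothetical payload for illustration.
type createUserRequest struct {
	Email string `json:"email"`
	Name  string `json:"name"`
}

func createUser(w http.ResponseWriter, r *http.Request) {
	var req createUserRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid JSON body", http.StatusBadRequest)
		return
	}
	if req.Email == "" {
		http.Error(w, "email is required", http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	if err := json.NewEncoder(w).Encode(map[string]string{"email": req.Email}); err != nil {
		log.Printf("encode response: %v", err)
	}
}

func main() {
	http.HandleFunc("/users", createUser)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```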

However, complex concurrency with advanced channel patterns still requires human oversight. My Redis and Kafka mocking approaches needed careful review of mutex usage and goroutine lifecycle management. Go's strict idioms and "one right way" philosophy often conflict with AI's pattern-based approach when dealing with sophisticated concurrent algorithms.
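To show the shape of the problem, here's a heavily simplified reconstruction - not my actual mock - of a mutex-guarded store with a background goroutine, the pattern where AI output needs line-by-line review:

```go
package main

import (
	"sync"
	"time"
)

// mockStore is a simplified stand-in for the kind of Redis mock I reviewed.
type mockStore struct {
	mu   sync.RWMutex
	data map[string]string
	done chan struct{} // signals the janitor goroutine to stop
}

func newMockStore() *mockStore {
	s := &mockStore{data: make(map[string]string), done: make(chan struct{})}
	go s.janitor() // AI versions often start this but never stop it
	return s
}

func (s *mockStore) Set(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[k] = v
}

func (s *mockStore) Get(k string) (string, bool) {
	s.mu.RLock() // read lock: AI output sometimes skips this entirely
	defer s.mu.RUnlock()
	v, ok := s.data[k]
	return v, ok
}

// janitor periodically clears expired state; the ticker and done channel
// are the lifecycle details that need human review.
func (s *mockStore) janitor() {
	t := time.NewTicker(time.Minute)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			// expiry logic elided
		case <-s.done:
			return
		}
	}
}

func (s *mockStore) Close() { close(s.done) }

func main() {
	s := newMockStore()
	defer s.Close()
	s.Set("k", "v")
	s.Get("k")
}
```

The two review points: every read and write path must hold the right lock, and the janitor goroutine must have a way to stop.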

Recent experience shows AI tools struggle with Go's more nuanced aspects. For instance, when I was working on Go nil maps safety, the AI initially generated code that would panic in production. These edge cases require deep Go knowledge that comes from experience, not pattern matching.
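The nil map case is worth spelling out because the language rule is subtle: reading from a nil map is safe, but writing to one panics. A minimal illustration:

```go
package main

import "fmt"

func main() {
	var counts map[string]int // declared but never initialised: nil

	fmt.Println(counts["requests"]) // reading a nil map is safe: prints 0

	// counts["requests"]++ // writing to a nil map panics at runtime

	counts = make(map[string]int) // initialise before writing
	counts["requests"]++
	fmt.Println(counts["requests"]) // prints 1
}
```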

The key insight: AI code generators work best for initial generation followed by automatic go fmt formatting. Always run go vet on AI-generated code, and keep human expertise for architecture decisions and performance-critical concurrent systems. Tools like Cursor handle large Go codebases effectively - Stream maintains over 800,000 lines of Go code using Cursor, with reported 5-30x productivity gains.
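In practice that review step is the standard toolchain run over whatever the AI produced - these are stock Go commands, nothing exotic:

```sh
gofmt -l -w .        # rewrite AI output into canonical formatting
go vet ./...         # flag suspicious constructs the compiler accepts
go test -race ./...  # surface data races in concurrent code
```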

Best practices for Go developers include treating AI as a powerful code generator while preserving Go's design philosophy. Use AI-generated code as a starting point, not final output, especially for anything involving complex state management or concurrent access patterns. My approach to building production-ready Go packages for LLM integration demonstrates how AI can accelerate development while maintaining Go's quality standards.

What Craft-Focused Developers Get Wrong

The biggest misconception is that delivery-focused developers don't care about code quality. We do - we just prioritize different quality metrics. I care about maintainability, security, and performance. I don't care about the satisfaction of typing every character.

Craft-focused developers are often disconnected from business reality. Everything in business is delivery-focused. You're employed to deliver value, not to enjoy syntax. When building systems at scale, what matters is whether the solution works reliably, not whether writing it felt rewarding.

The industry is moving toward AI-assisted development whether individual developers like it or not. My advice to resistant engineers: get involved because if you don't, you won't have a job. This will impact careers sooner than people think.

Recent studies highlight this urgency. A controversial MIT study found that in some cases, AI coding tools actually slow down experienced developers due to context switching overhead and review requirements. However, this reflects poor tool adoption rather than fundamental limitations. Teams that invest in proper AI coding assistant training see 3x better adoption rates and sustained productivity gains.

The Future of AI Coding Tools

In the coming years, engineers entering the workforce will get the same enjoyment from AI-assisted delivery that current developers get from manual coding. It's a different skill set focused on problem decomposition, requirement specification, and architectural thinking rather than syntax manipulation.

The trend toward terminal-based AI interfaces reflects this shift from assistive to autonomous development. Industry experts predict 95% of LLM interaction will happen through terminals rather than IDEs. This evolution enables AI agents to work in parallel on assigned tasks while developers focus on high-value architectural and business decisions.

Recent developments support this trajectory. GitHub Copilot launched multi-model support, adding Anthropic's Claude models alongside OpenAI's. Anthropic introduced Claude Code as its first truly autonomous coding agent. Windsurf emerged with advanced multi-file editing capabilities. The pace of innovation suggests we're approaching a tipping point where AI programming becomes the default approach.

This aligns with my philosophy around AI-first software development, where the development process itself gets reimagined around AI capabilities rather than simply adding AI to existing workflows. The rise of AI operating systems suggests we're moving toward environments where AI handles increasingly complex development tasks autonomously.

The misconception that AI coding assistants require no skill is dangerous. Tools like Cursor, GitHub Copilot, and Claude Code have steep learning curves. You need to understand how to work with them effectively, how to get optimal results, and how to catch their mistakes. This takes experimentation and deliberate practice. Teams need an average of 11 weeks to fully realize AI tool benefits, suggesting patience during initial adoption.

Context window sizes have expanded dramatically - Cursor offers 1M+ token context windows enabling understanding of entire codebases. Multi-model strategies have become standard, with platforms offering choice between different AI models for specific tasks. Response times continue improving - Cursor's autocomplete averages 320ms versus GitHub Copilot's 890ms.

What's missing from current AI for coding content is honesty about this learning investment. Most articles present AI coding tools as magic solutions rather than powerful tools requiring skill development.

Making the Transition to AI-First Development

If you're ready to move beyond using AI coding assistants as glorified autocomplete, here's my recommended approach:

Start with isolated features: Pick a self-contained feature for your first 100% AI implementation. Learn the workflow without risking critical systems. Begin with tasks AI handles well: API endpoints, database models, test scaffolding, and documentation generation.
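For example, test scaffolding is an ideal first target. A table-driven test like this one - the function under test is hypothetical, included so the sketch compiles - is exactly what AI generates well and humans verify quickly:

```go
package main

import (
	"strings"
	"testing"
)

// slugify is a hypothetical function under test for this sketch.
func slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(s)), " ", "-")
}

// TestSlugify is the table-driven scaffold AI generates well: add a case,
// get coverage, no architectural risk.
func TestSlugify(t *testing.T) {
	tests := []struct {
		name, in, want string
	}{
		{"simple", "Hello World", "hello-world"},
		{"trims whitespace", "  Go  ", "go"},
		{"already clean", "done", "done"},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			if got := slugify(tc.in); got != tc.want {
				t.Errorf("slugify(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```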

Invest in prompt engineering: Spend time learning how your chosen AI code generator responds to different instruction styles. Specificity and context matter enormously. Teams with structured prompt engineering education see 3x better adoption rates than those using ad-hoc approaches.

Build review processes: Develop systematic approaches for reviewing AI output. My security sub-agent workflow catches issues I'd miss in manual review. Integrate automated scanning tools and establish approval processes for AI-generated code focusing on security vulnerabilities.

Measure actual productivity: Track concrete metrics like feature completion time, not just perceived efficiency. The gains might surprise you. Monitor not just speed but also code quality metrics, bug rates, and long-term maintenance costs.

Accept the workflow change: This isn't about adding AI to existing development processes. It's about rebuilding your development workflow around AI capabilities. Consider phased rollout starting with pilot programs on low-risk projects.

Understand the economics: Enterprise tiers range from $19 to $39 per user per month. Factor hosting costs and usage-based billing into your budget. Some Cursor users reported billing surprises after pricing model changes, so monitor usage carefully.

The resistance will come from within your team and possibly from yourself. That's normal. Every major development tool shift - from assembly to high-level languages, from manual deployment to CI/CD - faced similar resistance. The key is acknowledging the trade-offs while focusing on measurable business outcomes.

The Bottom Line

AI for coding isn't about replacing developers - it's about amplifying delivery-focused developers while challenging craft-focused ones to adapt. The tools aren't perfect, but they're good enough to provide massive productivity gains when used correctly.

I've moved to 100% AI code generation because it aligns with my priorities: shipping reliable software quickly while maintaining quality through proper review processes. The systems I've built using this approach work in production and get built faster than traditional methods.

The question isn't whether AI coding tools are ready - it's whether you're ready to change how you work. For delivery-focused developers, the answer should be obvious. The best AI coding assistant is the one that gets out of your way and lets you focus on solving business problems rather than wrestling with syntax.

The future belongs to developers who can effectively direct AI to implement their ideas. Learning this skill now isn't optional - it's essential for staying relevant in a rapidly changing industry.


Need help with your business?

Enjoyed this post? I help companies navigate AI implementation, fintech architecture, and technical strategy. Whether you're scaling engineering teams or building AI-powered products, I'd love to discuss your challenges.

Learn more about how I can support you.
