Everyone says AI will replace developers. The truth is the opposite: AI makes being a skilled developer more valuable, not less.
I've been working on a fairly complex program recently - thousands of lines of code with intricate integration patterns and workflows spread across multiple systems. The type of codebase where if you look at one part in isolation, you might not understand the inputs or outputs without grasping how data flows through the entire system. It's exactly the kind of project where Andrej Karpathy's "vibe coding" approach - where you "forget that the code even exists" - would be disastrous.
Here's what I discovered: when I spin up a new Claude Code session on this complex system, it often suggests completely wrong approaches. Not just suboptimal - actually wrong. It doesn't accurately understand the data flows, misses crucial integration points, and suggests patterns that would break existing functionality. And this is with one of the most sophisticated AI coding tools available.
The vibe coding trend promises to "democratise development" by letting anyone generate working software through natural language prompts. But there's a crucial paradox everyone's missing: AI coding tools work best for experienced developers and worst for beginners. The very people who need the most help get the least value, while those who already know what they're doing get massive productivity gains.
The distinction isn't just academic - it's the difference between shipping production-ready systems and creating technical debt disasters.
The Skill Paradox Nobody Discusses
The false promise of vibe coding is seductive: write what you want in plain English, let AI generate the code, and deploy without understanding the implementation details. It's marketed as making development accessible to everyone, regardless of technical background.
The reality is more complex. What I've observed - both in my own work and across the industry - is that vibe coding creates two very different outcomes depending on who's using it.
- Vibe coding workflow: Natural language → AI generates → Accept without review → Deploy
- Skilled AI development: Natural language → AI generates → Expert review → Architecture alignment → Production-ready code
The difference isn't just about code quality - it's about understanding context. When you're working with complex systems, especially in financial services where bugs can cost millions, you need to understand how new code integrates with existing patterns, security models, and data flows.
I've seen vibe-coded applications with exposed API keys on the frontend, hardcoded credentials, and security patterns that would make any CISO weep. The developers didn't know these were problems because they lacked the architectural understanding to spot them. The AI generated plausible-looking code, but plausible isn't the same as correct.
The Experience Amplification Effect
Here's where the paradox becomes clear: AI coding tools like GitHub Copilot and Claude Code work brilliantly for experienced developers because we can guide them through existing codebases and spot the subtle issues they miss.
Pattern recognition is the key differentiator. When I see AI-generated Go code, I immediately notice if it's using dangerous nil map patterns or inappropriate concurrency approaches. I know what good code looks like because I've written thousands of lines and debugged countless production issues.
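The nil map pattern is a good example of why this recognition matters, because the bug hides from casual testing. A minimal sketch (the `safeIncrement` helper is hypothetical, purely for illustration):

```go
package main

import "fmt"

// safeIncrement shows the correct pattern: initialise the map before writing.
// Writing to a nil map panics at runtime; reading from one merely returns the
// zero value, which is why the bug often survives superficial testing.
func safeIncrement(m map[string]int, key string, delta int) map[string]int {
	if m == nil {
		m = make(map[string]int) // guard against the nil-map write panic
	}
	m[key] += delta
	return m
}

func main() {
	var balances map[string]int    // nil map: reads are fine, writes panic
	fmt.Println(balances["alice"]) // prints 0 (zero value) - no panic, no warning

	// balances["alice"] = 100 // would panic: assignment to entry in nil map

	balances = safeIncrement(balances, "alice", 100)
	fmt.Println(balances["alice"]) // prints 100
}
```

AI-generated code frequently declares a map and writes to it several functions later, where the missing `make` is easy to overlook in review.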
Architecture intuition lets experienced developers direct AI towards sound design decisions. When working on financial systems, I can guide the AI to follow established patterns for handling monetary calculations, audit trails, and compliance requirements. A vibe coder might accept whatever the AI suggests, not realising they've just introduced rounding errors or audit gaps.
Security mindset is perhaps most crucial. In fintech, where I've spent most of my career, security isn't an afterthought - it's built into every architectural decision. When AI suggests an authentication pattern, I can immediately assess whether it follows zero-trust principles, handles token expiration correctly, and integrates with existing authorisation systems.
The most telling insight came from Miguel Grinberg's observation: "It takes me the same time to review code as to write it." But here's the thing - experienced developers can review faster because we know what to look for. We've developed mental checklists from years of debugging production issues.
When I'm using my AI-powered code review setup, I'm not just checking syntax. I'm validating business logic, ensuring performance characteristics align with system requirements, and verifying that error handling follows established patterns. This isn't something you can learn from a few prompt engineering tutorials.
The Contextual Understanding Gap
This is the aspect of skilled AI development that vibe coding advocates completely miss: system knowledge trumps prompt skills every time.
You can't just tell AI to "add authentication to this endpoint" without understanding how your existing authentication middleware works, where session management lives, what error handling patterns the codebase uses, and how your logging and monitoring systems expect things to be structured.
Here's my actual workflow when using Claude Code or ccswitch to manage multiple coding sessions:
- AI suggests something plausible (but often wrong for my specific system)
- I redirect based on codebase context: "No, authentication in this codebase works through middleware chains, not inline validation"
- I guide AI through existing patterns: "Look at how we handle JWT validation in auth.go, then apply that same pattern here"
- AI generates code that actually fits the existing architecture
- Result: Code that integrates properly instead of creating architectural inconsistencies
This contextual direction requires deep familiarity with the codebase - something that only comes from either writing it yourself or spending significant time understanding how different components interact.
When I'm working on complex financial systems, I need to direct AI through established patterns for monetary calculations, transaction handling, and audit logging. The AI doesn't inherently know that we use specific decimal libraries to avoid floating-point errors, or that every state change needs to be auditable for compliance reasons.
Codebase literacy becomes more valuable with AI, not less. The skill isn't writing perfect prompts - it's reading and understanding large systems quickly enough to guide AI through them effectively.
The Integration Reality
One of the biggest gaps I've observed is that AI generates code in isolation, but experienced developers know how to make it fit into existing systems seamlessly.
Take a recent example from my work: I needed to add a new payment processing endpoint to a complex financial system. The AI's first suggestion was a standalone HTTP handler with inline validation and direct database access. Technically correct, but completely inconsistent with our established patterns.
Our system uses middleware chains for validation, service layers for business logic, and repository patterns for data access. More importantly, it follows specific patterns for handling financial transactions that ensure consistency and auditability.
An experienced developer immediately recognises these architectural mismatches. A vibe coder might accept the standalone handler, creating a maintenance nightmare where different endpoints follow completely different patterns.
This is why understanding system architecture becomes more important with AI tools, not less. The AI can generate the implementation details, but you need to direct it towards approaches that fit your existing system design.
The Vibe Coding Trap for Juniors
Junior developers face a particularly insidious problem with vibe coding: they lack the pattern recognition to distinguish good AI-generated code from plausible-but-dangerous code.
When AI generates code with security vulnerabilities, performance bottlenecks, or architectural inconsistencies, junior developers often can't spot the issues. They're not familiar enough with common pitfalls to know what to look for during review.
More dangerously, vibe coding can prevent skill development. If you're always accepting AI-generated code without understanding it, you never learn to recognise good patterns yourself. You become dependent on AI without developing the judgment to guide it effectively.
The data supports this concern: studies show that AI-generated code contains vulnerabilities 40% of the time, with accuracy rates of 65% for ChatGPT and 46% for Copilot on complex programming tasks. Without the experience to catch these issues, junior developers are essentially deploying untested code to production.
I've seen startups hit scaling walls when their vibe-coded MVPs needed refactoring for production loads. The original developers couldn't guide the refactoring because they never understood the underlying architecture patterns.
The Business Reality Check
What companies actually need isn't just speed - it's speed combined with quality, maintainability, and security. In regulated industries like financial services, code quality isn't negotiable.
When I'm working on payment processing systems or compliance reporting, every line of code needs to be traceable and auditable. Vibe coding's "forget the code exists" approach is fundamentally incompatible with these requirements.
The hiring market reflects this reality. Companies are looking for "AI-native developers" - but they mean developers who can use AI effectively, not vibe coders who generate code without understanding it. There's a significant salary premium for developers who can review and direct AI-generated code because they deliver both speed and quality.
Enterprise spending on AI development tools has grown from £600 million to £4.6 billion, but there's also rising awareness of AI code quality issues. The sweet spot isn't replacing developer judgment - it's amplifying experienced developer productivity.
My Recommended Approach
The solution isn't choosing between AI and traditional development skills - it's combining them strategically.
Learn fundamentals first: Understand what good code looks like before trying to generate it with AI. You need pattern recognition to guide AI effectively and catch its mistakes.
Master codebase navigation: Get comfortable reading and understanding large systems quickly. This lets you direct AI through existing patterns instead of creating architectural inconsistencies.
Use AI for acceleration: Let AI handle boilerplate generation and exploration of implementation approaches, but maintain oversight of architectural decisions and integration patterns.
Build systematic review processes: Develop checklists for security, performance, and maintainability that you apply to all AI-generated code. Tools like my automated review setup can help systematise this.
Practice contextual direction: Get skilled at explaining system context to AI so it generates code that fits your existing patterns. This requires deep understanding of your codebase architecture.
Maintain accountability: Own every line that goes to production, regardless of whether you or AI wrote it. The "it's not my code" mentality is a recipe for production disasters.
In my daily workflow, AI has shifted my role from writing code to directing and reviewing code generation. But this transformation requires more skill, not less - I need to understand systems well enough to guide AI through them effectively.
The Future Belongs to Skilled AI-Assisted Developers
Vibe coding works for prototypes and throwaway projects, but production systems need skilled oversight and contextual guidance. The opportunity isn't replacing developer judgment - it's amplifying it.
AI will continue improving, but system complexity will grow alongside it. The constant will be the need for human judgment, architectural thinking, and the ability to understand how code fits into larger systems.
The real skill gap isn't about writing better prompts - it's about understanding systems well enough to guide AI through them. Developers who can make AI understand their existing codebases will see 5-10x productivity gains. Those who rely purely on vibe coding will hit walls as soon as they need to integrate with complex existing systems.
Don't choose between AI and traditional development skills. The future belongs to developers who can make AI understand their codebase, not just developers who can describe what they want in natural language.
Want to see how I've integrated AI into my development workflow while maintaining code quality? Check out my AI-powered development process and learn how to get started the right way.