How AI Has Transformed My Daily Workflow: From Engineer to AI Manager

I've been watching my development approach shift entirely over the past two and a half years. What started with GitHub Copilot when I was building a rules engine for stocks and cryptocurrency trading has evolved into something I never expected - I'm barely coding anymore, and I'm delivering faster than ever.

It's not that I've stopped being technical. I'm just spending most of my time reviewing AI-generated code rather than writing it myself. The shift happened gradually, but looking back, it's been dramatic. I'd estimate I'm now 5-15 times faster at delivering working software, depending on the task.

The Current Toolkit

My workflow now centres on four main tools, each serving a specific purpose in what's become a surprisingly sophisticated pipeline.

Superwhisper handles all my interactions with AI when I can talk freely. Instead of typing out complex prompts, I literally speak to the AI as if I'm explaining a problem to a colleague. A typical interaction sounds like: "I need to make sure this Terraform is secure. Take a look specifically at the configuration for API gateway to Kubernetes, k8s must be in a private VPC and not accessible. Only ingress is through the gateway. I think there needs to be an ALB between them."

That natural explanation gives far more context than I'd ever type, and it's effortless. The AI gets the full picture of what I'm trying to achieve and why.

Claude Code gets used for larger changes or when I need to review an entire codebase. With the latest Sonnet and Opus models, the capability is remarkable. The key insight I've learnt is to ask Claude to plan first, make targeted changes rather than sweeping ones, and work step by step. This approach prevents the AI from making assumptions about parts of the code it can't see.

Cursor has become my primary development environment. With the new Anthropic models and recent updates, it's incredibly capable. The secret is limiting context and including only relevant files. When I'm working on a specific feature, I'll include the main file, any related configuration, and maybe a few test files. Nothing more. This keeps the AI focused and reduces hallucinations.

Task Master is an MCP server I use within Cursor for changes that involve several steps - build, test, document, deploy. When a change touches multiple parts of the system or requires a specific sequence of actions, Task Master handles the orchestration while I focus on the logic.

The Reality of AI-Assisted Development

The productivity gains are substantial, but they come with trade-offs I'm still processing. Most of my engineering skills are atrophying. Architecture decisions, implementation patterns, even syntax - it's all handled by the AI now. I've become more of a manager, closer to product than engineering.

I still do plenty of thinking upfront, but even that's becoming less critical. The AI can often figure out approaches I hadn't considered. My value now comes from hard-won insights about the problem domain and the ability to steer the solution. I might say "use Redis across multiple instances for rate limits" based on experience, but the implementation details are largely handled by AI.
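
To make that concrete, here's a minimal sketch of the kind of shared rate limiter that one-line steer tends to produce. It's illustrative only, assuming the standard redis-py client and a simple fixed-window counter; the host, key format, and limits are placeholders rather than anything from my actual project.

```python
import redis

# Shared Redis instance so every app instance sees the same counters
# (connection details are placeholders).
r = redis.Redis(host="redis.internal", port=6379, decode_responses=True)

def allow_request(client_id: str, limit: int = 100, window_seconds: int = 60) -> bool:
    """Fixed-window rate limit: at most `limit` requests per window, per client."""
    key = f"ratelimit:{client_id}"
    count = r.incr(key)                    # atomic increment across all instances
    if count == 1:
        r.expire(key, window_seconds)      # start the window on the first request
    return count <= limit
```

The point of the steer is the first line of the design, not the details: because the counter lives in Redis rather than in process memory, every instance behind the load balancer enforces the same limit.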

Direct coding makes up maybe 5% of my time now. The rest is reviewing, and there's a lot to review because the model throughput is high even when working in small chunks. A few hours of AI-assisted work can generate what would have taken me weeks to write manually.

Where Human Oversight Still Matters

The most common issues I catch during review relate to program flows and dependencies. The AI might add a new method but forget to wire it in when the program starts, rendering it useless. Or it might make a change that requires updating three other components because they're linked in ways that aren't immediately obvious from the local context.
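
A contrived example of the pattern I mean (the names are hypothetical, not from a real codebase): the AI writes a perfectly reasonable handler, but never registers it where the application is wired together, so the new code path is unreachable until a reviewer notices.

```python
# AI-generated addition: a new handler, correct in isolation.
class MetricsHandler:
    def handle(self, event: dict) -> None:
        print(f"recording metric: {event['name']}")

class Dispatcher:
    def __init__(self) -> None:
        self.handlers = []

    def register(self, handler) -> None:
        self.handlers.append(handler)

    def dispatch(self, event: dict) -> None:
        for handler in self.handlers:
            handler.handle(event)

def main() -> None:
    dispatcher = Dispatcher()
    # The review catch: MetricsHandler is never registered here,
    # so the handler above is dead code.
    # dispatcher.register(MetricsHandler())
    dispatcher.dispatch({"name": "login"})

if __name__ == "__main__":
    main()
```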

Understanding these flows and interdependencies is where human experience still matters significantly. The AI sees the code, but it doesn't always grasp the broader system behaviour. As context windows become cheaper and models more sophisticated, this gap will likely close, but for now it's where I add the most value.

The Economics of AI Development

This setup costs several hundred pounds a month, with Cursor being particularly expensive now that I'm using the Claude Sonnet 4 model. But I'm getting months of productivity out of it, so the economics work clearly in my favour.

The investment isn't just financial - there's a learning curve in understanding how to work effectively with these tools. Each AI model has its strengths and quirks. Learning to break down problems, provide the right context, and review output efficiently takes practice.

Becoming an AI Engineering Manager

The shift in my role has been profound. I've moved from being an engineer who occasionally used AI to being essentially an engineering manager for AI systems. I plan the work, provide context and direction, review output, and make sure everything fits together properly.

This parallels my experience with automating Git commits using AI - once you automate part of the development workflow, you start seeing opportunities to automate more of it. The difference is that now we're automating the core engineering work, not just the peripheral tasks.
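
For context, that earlier automation was conceptually something like the sketch below: read the staged diff, ask a model for a commit message, and commit. This is a reconstruction under assumptions rather than my original script, and the model name is a placeholder you'd swap for whatever you have access to.

```python
import subprocess
import anthropic  # assumes the official Anthropic Python SDK is installed

def ai_commit() -> None:
    # Staged changes only, so the message describes what will actually be committed.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout
    if not diff.strip():
        raise SystemExit("Nothing staged to commit.")

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": f"Write a one-line conventional commit message for this diff:\n\n{diff}",
        }],
    )
    message = response.content[0].text.strip()
    subprocess.run(["git", "commit", "-m", message], check=True)

if __name__ == "__main__":
    ai_commit()
```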

There's something both exciting and concerning about this evolution. I'm delivering faster than ever, but I'm also losing touch with the craft of programming. Whether this matters in the long term depends on how the industry evolves. If everyone is working this way, traditional programming skills may become less relevant. If not, I might be building up a dependency that could become problematic.

Practical Takeaways

If you're considering a similar approach, start small and build up your workflow gradually. Begin with one tool that addresses your biggest pain point - for me, that was Copilot speeding up routine coding tasks. Then add complementary tools as you understand how AI fits into your process.

Focus on learning how to provide good context and review output effectively. These skills will be valuable regardless of which specific tools you use. The AI landscape changes rapidly, but the principles of clear communication and thorough review remain constant.

Most importantly, be honest about the trade-offs. You'll likely become faster at delivering software, but you may also become rusty at the fundamentals of engineering. Whether that's acceptable depends on your career goals and how you see the industry developing.

The tools are good enough now that using them isn't experimental - it's just efficient. The question isn't whether to adopt AI assistance, but how to do it thoughtfully while maintaining the skills and judgement that matter most.


