A year ago, I started using AI coding tools seriously — not just kicking tyres with toy projects, but leaning on them day-to-day in production codebases. Copilot, Cursor, Claude, ChatGPT — I've run the gauntlet. Here's what I've actually learned.
There's no shortage of hot takes online. One camp says AI will replace developers within five years. The other insists it's just a fancy autocomplete. Neither is right.
The honest answer is more boring: AI coding tools are genuinely useful, but they don't write your software for you. They shift the bottleneck. Instead of spending time typing out boilerplate, you spend time reviewing, verifying, and steering. At least until fully autonomous agents become mainstream, the work changes shape — it doesn't disappear.
Boilerplate is where AI shines brightest. Writing a new API route that follows the same pattern as twenty others? Generating TypeScript types from a schema? Setting up test fixtures? These are tasks where the "what" is obvious and the "how" is just mechanical. AI handles them well.
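To make "test fixtures" concrete, here is the kind of boilerplate I mean — a minimal fixture-factory sketch. The `User` shape and `makeUser` defaults are invented for illustration, not taken from any real codebase; the pattern, not the specifics, is what AI reliably reproduces:

```typescript
// Illustrative only: this User shape and its defaults are made up for the
// example. The pattern — a factory with overridable defaults — is the kind
// of mechanical boilerplate AI generates well.
interface User {
  id: number;
  name: string;
  email: string;
  isAdmin: boolean;
}

// Tests override only the fields they care about; everything else gets
// a sensible default.
function makeUser(overrides: Partial<User> = {}): User {
  return {
    id: 1,
    name: "Test User",
    email: "test@example.com",
    isAdmin: false,
    ...overrides,
  };
}

const admin = makeUser({ isAdmin: true });
```

Once one factory like this exists, AI can churn out the other nineteen variants in seconds — which is exactly the "obvious what, mechanical how" sweet spot.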
Recently, I've been rapidly building entire websites and tools by leveraging agent skills effectively. The key insight is that if you can articulate your ideas clearly and precisely, you can direct AI to complete substantial projects in remarkably little time.
For example, I built and open-sourced DevPilot — a CLI toolkit that turns markdown plans into shipped code by orchestrating task sources (Trello or GitHub Issues), Claude Code as the AI coding agent, and standard Git/GitHub workflows. It autonomously creates branches, writes code, opens PRs, runs AI code review, and auto-merges. I didn't write a single line of code by hand in that project. Beyond coding, DevPilot also makes it easier to use AI for everyday tasks — like scanning my inbox for unread emails and generating a detailed summary, or sending briefings and search results to me via Slack.
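The branch → code → PR → review → merge loop above can be sketched as a simple staged pipeline. To be clear, this is a hypothetical simplification for illustration — none of these function or stage names come from DevPilot's actual API; each stage is stubbed to show the control flow only:

```typescript
// Hypothetical sketch of a task pipeline like the one described above.
// Every name here is illustrative; real stages would call Git, the GitHub
// API, and an AI coding agent instead of returning strings.
type Task = { id: string; title: string };

interface Stage {
  name: string;
  run: (task: Task) => string;
}

const stages: Stage[] = [
  { name: "branch", run: (t) => `created feature/${t.id}` },
  { name: "code",   run: (t) => `committed implementation of "${t.title}"` },
  { name: "pr",     run: (t) => `opened PR for ${t.id}` },
  { name: "review", run: () => "AI review passed" },
  { name: "merge",  run: (t) => `merged ${t.id}` },
];

// Drive one task through every stage in order, keeping a log so a human
// can audit what the automation did.
function runPipeline(task: Task): string[] {
  return stages.map((s) => `${s.name}: ${s.run(task)}`);
}
```

Running `runPipeline({ id: "T-42", title: "add login page" })` produces a five-entry audit log, one line per stage. The real value of this shape is that each stage is swappable — Trello vs. GitHub Issues as the task source, or a different review step — without touching the loop itself.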
When I need to work with a library I haven't touched before — say, configuring a complex AWS CDK stack or writing a tricky SQL migration — AI is a great sparring partner. It's faster than sifting through documentation, and the back-and-forth conversation helps me build a mental model quickly.
I think of it like pair programming with someone who has read every doc page but has never shipped anything. They can tell you what's possible, but you need to decide what's right.
I've started asking AI to review my own code before pushing. It catches things I miss — unused imports, inconsistent error handling patterns, edge cases I hadn't considered. It's not a replacement for human review, but it's a useful first pass.
For refactoring, AI is surprisingly good at taking a chunk of messy code and proposing a cleaner structure. I don't always accept its suggestions wholesale, but it often gives me a direction I hadn't considered.
AI doesn't understand your system. It doesn't know that your team decided to avoid GraphQL because of a bad experience two years ago, or that your deployment pipeline can't handle long-running builds. When you ask it to design a system, it gives you a textbook answer — technically correct, contextually useless.
Architecture is about trade-offs shaped by constraints that live in people's heads, not in code. AI can't access that knowledge.
When something goes wrong in a non-obvious way — a race condition, a caching issue, a timezone edge case — AI struggles. It'll confidently suggest fixes that address the symptoms but miss the root cause. I've learned to be especially cautious when AI says "the fix is simple" for a bug I've been staring at for an hour.
Design decisions, naming conventions, API ergonomics, user experience — these all require judgement that AI doesn't have. It can produce something functional, but "functional" and "good" aren't the same thing.
I've seen teams accept AI-generated code that technically works but reads like it was written by committee. No consistent voice, no clear patterns, just a pile of working code. That's a maintenance nightmare waiting to happen.
The biggest shift isn't in what I produce — it's in how I think about my time.
Before AI tools, I'd estimate a task and mentally allocate time for implementation. Now, I allocate less time for the initial draft and more time for review and testing. The ratio has flipped from maybe 70/30 (writing/reviewing) to closer to 40/60.
I also write more throwaway code. If I'm exploring an approach and I'm not sure it'll work, I'll let AI generate a quick prototype. If it's a dead end, I've lost ten minutes instead of two hours. This has made me more willing to experiment.
My daily setup looks something like this:
There's a skill gap forming, and it's not the one people expect.
Junior developers who lean too heavily on AI risk never building the debugging intuition that comes from struggling with a problem. When you've spent two hours tracking down a null pointer, you develop a sixth sense for where things go wrong. AI short-circuits that learning process.
On the other hand, senior developers who refuse to adopt AI tools are leaving productivity on the table. The tool doesn't diminish your expertise — it amplifies it. The more you know, the better you can prompt, evaluate, and integrate AI output.
The developers who'll thrive are the ones who can do both: solve hard problems from first principles and leverage AI for everything else.
I don't think AI will replace developers. I think it will raise the bar for what one developer can accomplish. A solo developer with good AI tools today can ship what a small team could three years ago. That's not a threat — it's an opportunity.
What I am cautious about is the commoditisation of "basic" development work. If AI can scaffold a standard web app in minutes, the value shifts to everything around the code: understanding the problem, designing the right solution, and maintaining it over time.
The engineers who focus on those skills — problem framing, system thinking, communication — will do fine. The ones banking entirely on typing speed or memorising API signatures should be worried.
If you're integrating AI into your workflow, here's what I'd suggest:
AI coding tools aren't magic. They're power tools. And like any power tool, they're only as good as the person using them.
Written by Siyu Qian — Senior Full-Stack Developer in Auckland, New Zealand.
© 2026 Siyu Qian. All rights reserved.