AI tools have fundamentally changed how I write code. Claude, ChatGPT, and Copilot are part of my daily workflow, and I use them across nearly every project I take on. They are genuinely useful. I am not here to argue otherwise.
But after more than 25 years of shipping production systems -- systems that process real transactions, manage real inventory, and run real businesses -- I have developed a specific methodology for how I use AI. It is not about enthusiasm or skepticism. It is about discipline. When a company with 200 employees depends on your integration staying up overnight, "the AI wrote it" is not an acceptable root cause analysis.
This is not a debate about whether to use AI. That debate is over. This is about how to use it responsibly when the stakes are real.
Where AI Genuinely Accelerates My Work
There are categories of work where AI saves me significant time with very low risk. I lean on it heavily for these.
Boilerplate generation. API endpoint scaffolding, database migration files, test harness setup, configuration files. These follow well-known patterns and are tedious to write by hand. AI handles them well because the patterns are thoroughly represented in its training data.
Pattern implementation. "Write a retry mechanism with exponential backoff and jitter." "Implement a circuit breaker with half-open state." AI nails these. They are defined, documented patterns with clear specifications. There is little room for ambiguity, and the output is easy to verify.
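To make that concrete, here is the kind of output I expect for the backoff prompt -- a minimal sketch, with full jitter, where the function names and defaults are mine rather than any particular library's:

```typescript
/** Full-jitter delay: a random value in [0, min(cap, base * 2^attempt)). */
function backoffDelay(attempt: number, baseMs = 100, capMs = 10_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}

/** Retry an async operation, sleeping a jittered backoff between attempts. */
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Jitter spreads out retry storms when many clients fail at once.
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}
```

The output is easy to verify precisely because the spec is tight: the delay curve is a pure function you can test in isolation.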
Documentation. Generating JSDoc comments, docstrings, and inline documentation from existing implementations. AI reads code accurately and describes what it does. I still review the output, but it turns a 30-minute task into a 5-minute task.
Code review assistance. "What edge cases am I missing in this webhook handler?" "Are there race conditions in this queue consumer?" AI is surprisingly good at spotting things you have been staring at too long to see. It acts as a second pair of eyes that never gets tired.
Data transformation. Complex SQL queries, data mapping functions, CSV parsing logic, JSON schema transformations. These are mechanical tasks with clear inputs and outputs. AI does them faster than I do, and the results are easy to test.
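A representative example of what I mean -- a small mapping function with hypothetical field names, the kind of thing AI drafts in seconds and a unit test verifies in seconds more:

```typescript
interface OrderRow {
  id: string;
  totalCents: number;
  currency: string;
}

/** Parse one CSV line like "1042,19.99,usd" into a typed record. */
function parseOrderLine(line: string): OrderRow {
  const [id, total, currency] = line.split(",").map((s) => s.trim());
  return {
    id,
    // Store money as integer cents to avoid floating-point drift downstream.
    totalCents: Math.round(parseFloat(total) * 100),
    currency: currency.toUpperCase(),
  };
}
```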
The common thread: these are high-confidence, low-risk tasks where the output is easily verifiable. If AI gets it wrong, I catch it immediately. If it gets it right, I have saved hours.
Where AI Gets Dangerous
Then there are the categories where AI generates output that looks correct but is not. These are the areas where inexperienced developers get burned.

Architecture decisions. AI does not know your business constraints. It does not know your team has three developers, not thirty. It does not know your budget, your timeline, your compliance requirements, or the political dynamics that determine what you can actually ship. I have watched AI recommend Kubernetes for a project that needed a single VPS with a systemd service. The recommendation was technically valid. It was practically absurd.
Integration design. This is where I see the most dangerous AI output. AI does not know that the NetSuite API has a 10 requests-per-second rate limit that it enforces inconsistently. It does not know that PrestaShop webhooks fail silently under high load. It does not know that a particular payment provider returns different error formats for sandbox versus production. AI generates code that "looks right" -- correct syntax, reasonable structure, sensible error handling -- and then breaks under conditions that only show up when real data hits real APIs.
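The fix for a rate limit like that is client-side throttling, which AI will not add unless you already know to ask for it. A minimal token-bucket sketch -- the class and method names are mine, and the injectable clock exists purely to make it testable:

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private ratePerSec: number,
    private capacity: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  /** Take one token if available; returns false when the caller must wait. */
  tryAcquire(now = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at bucket capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

For an API limited to 10 requests per second you would construct this as `new TokenBucket(10, 10)` and back off whenever `tryAcquire` returns false -- and because the limit is enforced inconsistently on the server side, you still keep the retry logic as a second line of defense.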
Security. AI-generated code often has subtle vulnerabilities. Missing input validation on one of five parameters. Overly permissive CORS configurations. SQL injection vulnerabilities in dynamically constructed queries that only appear when a user provides unexpected input. The code passes a basic review. It looks clean. But it has gaps that an experienced security review would catch. The problem is that the gaps are subtle enough that someone who relies too heavily on AI might not look closely enough to find them.
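The SQL injection case in miniature. The shapes below mirror common driver conventions ($1-style placeholders), but the functions themselves are illustrative, not a real API:

```typescript
// Vulnerable: a name like "x'; DROP TABLE orders; --" escapes the literal
// and becomes part of the SQL text. This passes a casual review.
function findByNameUnsafe(name: string): string {
  return `SELECT * FROM customers WHERE name = '${name}'`;
}

// Safe: the value travels separately from the SQL text, so the driver
// never interprets user input as syntax.
function findByNameSafe(name: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM customers WHERE name = $1", values: [name] };
}
```

The unsafe version works perfectly in every test that uses normal names, which is exactly why the gap survives review.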
Production operations. "How should I handle this database migration on a live system serving 60+ client servers?" AI gives textbook answers. Add a column, backfill, drop the old one. It does not mention that you need to coordinate with the connection pool settings, that the backfill might lock the table for longer than your health check timeout, or that three of those 60 servers are running a version of the client software from 2019 that does not handle the schema change gracefully. These are the things you learn from experience, not from training data.
My Review Process for AI-Generated Code
Every piece of AI output goes through the same review I would give a junior developer's pull request. No exceptions. The fact that AI wrote it does not give it a pass. If anything, it gets more scrutiny, because AI is confidently wrong in ways that junior developers usually are not.
Step 1: Does it fit the architecture? AI does not know your system. It generates code in isolation. Does this new function fit the patterns the rest of the codebase uses? Does it respect the separation of concerns you have established? Does it use the right abstraction layer?
Step 2: Edge cases. What happens with null input? What happens on network timeout? What about concurrent requests hitting the same resource? What if the payload is 100x larger than expected? AI tends to generate happy-path code. The edge cases are where production systems actually fail.
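The guards this step looks for, sketched. The limit and the field name are illustrative, not from a real system:

```typescript
const MAX_PAYLOAD_BYTES = 1_000_000;

interface ParsedEvent {
  orderId: string;
}

function parseEvent(raw: string | null): ParsedEvent {
  // Null/empty input: fail loudly instead of passing undefined downstream.
  if (!raw) throw new Error("empty payload");
  // Oversized input: reject before JSON.parse allocates a huge object.
  if (Buffer.byteLength(raw, "utf8") > MAX_PAYLOAD_BYTES) {
    throw new Error("payload too large");
  }
  const data = JSON.parse(raw);
  // Missing or mistyped required field: surface it here, not three layers deeper.
  if (typeof data.orderId !== "string") throw new Error("missing orderId");
  return { orderId: data.orderId };
}
```

AI-generated handlers typically contain only the `JSON.parse` line and the happy-path return; every guard above is something the review has to add.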
Step 3: Security scan. Input validation on every parameter. Authentication and authorization checks in place. No data leakage in error messages. No overly permissive configurations. No hardcoded values that should be environment variables.
Step 4: Performance. Will this work at scale, or does it only work with the 10-row test dataset? Is there an N+1 query hiding in there? Is it loading an entire table into memory when it should be streaming? Does it open connections it never closes?
Step 5: Failure handling. AI generates optimistic code. It assumes the network is reliable, the database is available, and the external API returns what it should. Production systems need to handle the opposite gracefully. Retries, circuit breakers, dead letter queues, meaningful error logging -- this is the infrastructure that keeps systems running when things go wrong, and AI rarely includes it unprompted.
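One piece of that infrastructure sketched out: a circuit breaker with a half-open state. Thresholds, cooldowns, and names are illustrative; the injectable timestamps exist so the state machine can be tested without waiting:

```typescript
type BreakerState = "closed" | "open" | "half-open";

class CircuitBreaker {
  private state: BreakerState = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  /** Current state; transitions open -> half-open once the cooldown elapses. */
  currentState(now = Date.now()): BreakerState {
    if (this.state === "open" && now - this.openedAt >= this.cooldownMs) {
      this.state = "half-open"; // allow one trial request through
    }
    return this.state;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = "closed";
  }

  recordFailure(now = Date.now()): void {
    this.failures++;
    // A failure while half-open, or too many consecutive failures, opens it.
    if (this.state === "half-open" || this.failures >= this.threshold) {
      this.state = "open";
      this.openedAt = now;
    }
  }
}
```

The caller checks `currentState()` before each downstream request and skips the call entirely while the breaker is open -- which is what stops a dying dependency from taking your service down with it.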
If the output fails any of these steps, I rewrite the problematic parts manually. Sometimes that means rewriting most of it. That is fine. AI still saved time by giving me a starting point and a structure to critique.
The Force Multiplier Effect
Here is the real value of AI for experienced engineers, and it is significant: I know what to build. AI helps me build it faster.
The architecture decision -- event-driven versus request-response, monolith versus microservices, which database, which queue, which caching strategy -- that comes from experience. From having seen systems fail in specific ways. From understanding the trade-offs that do not appear in documentation. AI cannot provide that judgment. It can summarize what others have written about trade-offs, but it cannot weigh them against the specific context of your project, your team, and your constraints.
But once those decisions are made, AI accelerates the implementation significantly. A junior developer using AI gets fast code that might be wrong in ways they cannot detect. A senior engineer using AI gets fast code that they can verify is right -- or identify exactly where it is wrong and fix it.
A concrete example: I recently built a complete webhook handler for a NetSuite-to-e-commerce integration in about 30 minutes instead of the usual two hours. The architecture decisions were mine -- event-driven processing, idempotent message handling, a dead letter queue for failed deliveries, and structured logging for debugging production issues. The implementation was accelerated by AI. And the review caught two edge cases AI missed: a race condition when duplicate webhooks arrived within the same processing window, and a missing null check on an optional field that NetSuite only populates for certain transaction types.
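The idempotency piece of that handler, reduced to its core. A real system would back this with a database or Redis rather than an in-memory Set, and the names here are mine:

```typescript
class IdempotentHandler {
  private seen = new Set<string>();

  constructor(private process: (payload: string) => void) {}

  /** Returns true if the event was processed, false if it was a duplicate. */
  handle(eventId: string, payload: string): boolean {
    // Mark the event as seen BEFORE processing, so a duplicate arriving in
    // the same window cannot slip between the check and the work -- the
    // race condition the review caught.
    if (this.seen.has(eventId)) return false;
    this.seen.add(eventId);
    this.process(payload);
    return true;
  }
}
```

With durable storage the same idea becomes a unique-key insert: if the insert fails because the event ID already exists, you drop the duplicate without touching the business logic.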
Without AI, the same work takes two hours. With AI but without the review, those two bugs ship to production and cause silent data inconsistencies that surface weeks later. The combination of experience and AI is where the value lies.
What This Means for Hiring Engineers
If you are hiring a developer in 2026, the question is not "do they use AI?" Everyone uses AI. The tools are too good not to. The question that actually matters is: do they know enough to catch when AI is wrong?
That requires something AI cannot provide. It requires years of watching systems fail at 2 AM and understanding why. It requires the scar tissue from a migration that took down a production database, and the judgment that prevents it from happening again. It requires knowing that the "clean" solution is sometimes the wrong solution because the legacy system it integrates with has undocumented quirks that only surface under load.
AI makes experienced engineers faster. It does not make inexperienced engineers experienced. The engineers who will be most valuable in the coming years are the ones who have deep enough knowledge to use AI as a force multiplier rather than a crutch -- who can look at AI-generated code and immediately see what is missing, what is naive, and what will break when it meets reality.
The tools will keep getting better. The judgment to use them well still has to be earned.