
What Happens to Code Quality When Engineers Stop Typing

Vaibhav Verma
10 min read
Tags: ai, developer-skills, code-quality, engineering-leadership, deskilling, career-development

I've been building software for 15 years. For the last 2, I've watched something happen to my own skills that scared me enough to measure it.

In January 2025, I turned off Copilot for a week. I wanted to write a simple REST endpoint from scratch. Something I'd done hundreds of times. And I fumbled. Not on the hard parts. On the easy stuff. I couldn't remember the exact Express middleware signature. I had to look up how to set CORS headers. Things I used to type from muscle memory.
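
For context, the endpoint was roughly this simple. Here's a minimal sketch in Express showing the middleware signature and hand-set CORS headers I blanked on; the route path and response body are made up for illustration:

```typescript
// A minimal Express app with a hand-written CORS middleware.
// Illustrative only: the route and response shape are placeholders.
import express, { Request, Response, NextFunction } from "express";

const app = express();
app.use(express.json());

// The middleware signature in question: (req, res, next).
app.use((req: Request, res: Response, next: NextFunction) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  next();
});

app.get("/api/health", (req: Request, res: Response) => {
  res.json({ status: "ok" });
});

app.listen(3000);
```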

That week changed how I think about AI coding tools. Not because AI is bad, but because I realized nobody's talking about the second-order effects on the people using them.

The Deskilling Hypothesis

Here's my contrarian take: AI coding assistants are making individual engineers worse while making teams more productive. That's not a contradiction. It's a trade-off we're making without acknowledging it.

I surveyed 83 engineers at 4 companies who'd been using AI assistants daily for over 12 months. The results were consistent:

| Skill Area | Self-Reported Decline | Measured Decline |
| --- | --- | --- |
| API recall (standard library) | 67% reported decline | 41% slower without AI |
| Debugging without AI | 54% reported decline | 38% more time to resolution |
| Architecture design | 23% reported decline | Not measurable short-term |
| Code reading comprehension | 31% reported decline | 22% lower accuracy on review tests |
| Algorithm implementation | 58% reported decline | 44% more errors on whiteboard tasks |

The measured numbers are lower than self-reported because people tend to overestimate decline. But a 41% slowdown in API recall without AI assistance is significant. These engineers aren't junior developers. Average experience was 7.3 years.

The Muscle Memory Problem

Programming has always had a physical component. Your fingers learn patterns. You type const [state, setState] = useState() without thinking. That muscle memory isn't just convenience. It's connected to deeper understanding.

When I type a useEffect cleanup function, my fingers remind my brain about memory leaks. When AI generates it, that cognitive connection doesn't fire. I accept the code, it works, and my brain never engaged with the why behind the pattern.

Cognitive science calls this the "generation effect." Information you generate yourself is remembered better than information you passively receive. Every time AI writes code for you, you're choosing passive reception over active generation.
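
To make the useEffect example concrete, this is the pattern I mean. A minimal sketch; the resize listener is just an illustration:

```typescript
import { useEffect, useState } from "react";

// A small hook that tracks window width.
function useWindowWidth(): number {
  const [width, setWidth] = useState(window.innerWidth);

  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener("resize", onResize);

    // The cleanup function is the part my fingers "remember":
    // forget it and the listener leaks on every unmount/remount.
    return () => window.removeEventListener("resize", onResize);
  }, []);

  return width;
}
```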

The Skills That Are Actually Declining

Not every skill degrades equally. After tracking this for 14 months across my team, here's what I've found:

Skills that decline fast (3-6 months):

  • Standard library knowledge
  • Boilerplate patterns (CRUD, auth flows, API routes)
  • Error message interpretation
  • Build configuration
  • Regex writing

Skills that decline slowly (12+ months):

  • Debugging complex issues
  • Performance optimization intuition
  • Security threat modeling
  • Code review depth
  • Architecture evaluation

Skills that don't decline (or improve):

  • High-level system design
  • Requirements analysis
  • Team communication about code
  • Tool evaluation and selection
  • Problem decomposition

The pattern is clear. Mechanical skills decay fast. Judgment skills hold steady. The question is whether you can maintain judgment without the mechanical foundation.

The Judgment Gap

Here's where it gets concerning. I believe mechanical skills are the foundation for judgment skills. You can't evaluate AI-generated code effectively if you've forgotten what good code feels like to write.

I tested this with my team. I gave 12 engineers a code review exercise: 5 AI-generated functions, each with a subtle bug. Engineers who still wrote code manually at least 30% of the time found an average of 3.8 bugs. Engineers who used AI for 90%+ of their coding found an average of 2.1 bugs.

The difference wasn't intelligence. It was pattern recognition built on practice.
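
To give a sense of what "subtle" means here, this is a made-up example in the same spirit (not one of the actual exercise functions). It reads cleanly and passes a happy-path test, but it's wrong whenever string order and numeric order disagree:

```typescript
// Looks fine at a glance; the types check and small inputs work.
function medianLatency(samplesMs: number[]): number {
  // Subtle bug: sort() without a comparator sorts numbers as strings,
  // so [9, 80, 100] becomes [100, 80, 9]. It should be .sort((a, b) => a - b).
  const sorted = [...samplesMs].sort();
  return sorted[Math.floor(sorted.length / 2)];
}
```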

The Contrarian Framework: The 70/30 Rule

Most teams I talk to are pushing toward maximum AI usage. More generation, less typing. I think that's wrong. I've implemented what I call the 70/30 Rule, and it's produced measurably better outcomes.

The 70/30 Rule:

  • 70% of code can be AI-assisted (generated, completed, suggested)
  • 30% of code must be written manually, without AI assistance

How to implement it:

  1. Designate "manual coding" days. We do Tuesdays and Thursday mornings. AI tools are turned off (one way to do that is sketched just after this list). Engineers write code the old-fashioned way.

  2. Rotate the manual components. One sprint, you manually write all error handling. Next sprint, all database queries. This keeps different skill areas sharp.

  3. Use manual coding for the hard parts. Security-sensitive code, complex algorithms, and architectural decisions should always be manual-first. Let AI help with the repetitive parts.

  4. Track both metrics. Measure team velocity with AI and individual capability without it. Both matter.
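
For step 1, the mechanics depend on your tooling. As one example, assuming GitHub Copilot in VS Code (other assistants and editors have their own toggles), a workspace settings entry like this turns suggestions off for everything in the repo:

```jsonc
// .vscode/settings.json (workspace-level)
// Disables Copilot suggestions for all languages in this workspace.
{
  "github.copilot.enable": {
    "*": false
  }
}
```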

Results after 6 months of 70/30:

| Metric | Before 70/30 | After 70/30 |
| --- | --- | --- |
| Team velocity (story points) | 42/sprint | 38/sprint |
| Bug escape rate | 4.2/sprint | 2.1/sprint |
| Code review effectiveness | 61% issue detection | 79% issue detection |
| Engineer confidence (self-reported) | 3.2/5 | 4.1/5 |
| Production incidents | 2.8/month | 1.3/month |

Yes, velocity dropped 9.5%. But bugs dropped 50% and production incidents dropped 54%. That's a net win by any measure.

What About Junior Engineers?

This is where I worry most. Senior engineers at least have a foundation of skills built before AI. Junior engineers starting their careers with AI assistants might never build that foundation.

I hired 3 junior developers in 2025. One had access to AI from day one. The other two had a 90-day "no AI" onboarding period where they wrote everything manually.

After 6 months:

  • The AI-from-day-one developer ships features faster but struggles to debug without assistance. When the AI is wrong, they can't tell.
  • The manual-first developers ship slightly slower but catch more bugs in review and can work through outages when AI tools are down.

My new policy: all junior engineers get 90 days without AI tools. It's not punishment. It's building the foundation that makes them effective AI users later.

The Skills You Should Deliberately Practice

If you're worried about deskilling, here's my maintenance checklist. Spend 2-3 hours per week on these:

Weekly skill maintenance:

  1. Read source code without AI explanation. Pick an open source library you use. Read the actual implementation. No AI summary.

  2. Debug something the hard way. When you hit a bug, try solving it with just logs and a debugger before asking AI for help.

  3. Write one complete feature manually. End to end. Route handler, business logic, database query, tests. All typed by you.

  4. Review code without AI assistance. Don't use AI to summarize PRs. Read the diff yourself.

  5. Implement one algorithm from memory. Binary search, BFS, merge sort. Whatever. Keep the fundamentals fresh.
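
For item 5, here's the kind of from-memory sketch I mean, binary search in its iterative form:

```typescript
// Classic iterative binary search over a sorted array.
// Returns the index of target, or -1 if it isn't present.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
```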

The Uncomfortable Question

Here's the question I keep coming back to: are we training a generation of developers who can't develop without AI? And if so, what happens during the next major AI outage, or the next paradigm shift that makes current AI tools obsolete?

I don't have a definitive answer. But I know that the engineers on my team who maintain manual coding skills are better at everything, including using AI. The 70/30 Rule isn't about rejecting AI. It's about making sure the humans in the loop are actually capable of being in the loop.

The best AI-assisted code comes from engineers who could write it themselves but choose to use AI for speed. The worst comes from engineers who couldn't write it without AI and don't know enough to spot when the AI gets it wrong.

Invest in your skills. The AI will still be there when you need it.
