Sunday, March 15, 2026

Dead Code Is a Cognitive Tax — Here's How AI Helps You Stop Paying It

Every engineer knows the feeling. You open an unfamiliar part of the codebase, and you're immediately staring down a tangle of services, workers, models, and task entries — none of which come with a label saying "still matters" or "abandoned in 2023." You read the code carefully, try to trace the call graph, maybe even grep for usages — and only after 30 minutes do you realize: this thing hasn't run in production for over a year.

That tax on your attention has a name: cognitive load. And dead code is one of its most insidious sources.


What Is Cognitive Load in a Codebase?

Cognitive load, in the context of software engineering, is the total mental effort required to understand a system well enough to work in it safely. Every class, method, model, and background job you encounter is a unit of context you have to hold in your head.

The problem is that your brain doesn't automatically know which of those units are live and which are ghosts. If an EstimateWorker class exists in your repo, you have to assume it matters — until you prove otherwise. That proof takes time, attention, and often a distracting detour away from the actual work you sat down to do.

Dead code doesn't just waste disk space. It actively misleads you.
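That misleading quality is easy to illustrate. Here's a minimal, hypothetical sketch (the class and its behaviour are invented for illustration, not taken from our codebase): everything about this worker reads as live infrastructure.

```ruby
# A hypothetical worker that reads as live infrastructure: it has a clear
# job, a sensible name, and passing specs. Nothing in the file reveals
# that no caller has enqueued it since its upstream was switched off.
class GhostReportWorker
  # include Sidekiq::Worker  # omitted so the sketch runs standalone
  def perform(report_id)
    # fetch inputs, compute, upload... all plausible, all unreachable
    "report-#{report_id}-uploaded"
  end
end
```

The only way to learn it's dead is evidence from outside the file: call graphs, schedules, and runtime data.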

A Real-World Example: The Estimation Pipeline Cleanup

Recently, our team completed a cleanup effort across seven pull requests targeting a legacy estimation infrastructure — a suite of services originally built around Prophet forecasts and a Clair analysis pipeline — that had gone completely dark since late 2023.

Here's what was still sitting in the codebase, doing nothing:

  • EstimateService — fetched a CSV over HTTP, upserted records into the database, and refreshed an estimation cache. Silent for months.
  • EstimateWorker — a Sidekiq background job that uploaded files to S3, triggered the estimation flow, and posted Slack notifications. Long dead.
  • Estimation::Prophet::DownloadWorker — downloaded forecast CSVs from S3 and upserted them into a Prophet table. Never called.
  • Estimators::ClairAnalysis — computed hourly analysis records for a brief window in late 2023, then stopped.
  • ClairAnalysis model and its backing database table — zero writes since the pipeline went quiet.
  • Three SwitchBoard dispatch entries: events_collect_for_next_week, generate_weekly_user_report, and estimate_v2, all orphaned task names in a routing map.

Any engineer — or AI assistant — reading this codebase would reasonably assume all of the above was active production infrastructure. None of it was.

The Numbers

7 pull requests · 31 files changed · 943 lines deleted · 816 net lines removed
PR     Branch                         +Added   −Deleted   Files
#1     cleanup-tasks                      13         16       2
#2     cleanup-unused-estimate             0         74       4
#3     remove-clair-analysis               0        314       2
#4     remove-prophet                      0        210       5
#5     remove-clair-analysis-model        20         57       3
#6     rename-clair-v2s                   94         68      13
#7     remove-estimate-unused              0        204       2
Total                                    127        943      31

The 127 additions are almost entirely the rename PR (#6) — migrations, updated references, and renamed specs. Every other PR was pure deletion.


The Cognitive Impact of the Cleanup

Cleaner model surface. Once EstimateService, EstimateWorker, and ClairAnalysis were gone, the remaining models — Clair, ClairDailyInterimResult, ClairSetting — actually reflected how the system works today.

Naming that signals intent. ClairV2 implies a versioning scheme. ClairDailyInterimResult tells you exactly what the thing is and why it exists.

A smaller SwitchBoard dispatch map. Removing the three orphaned entries made the dispatch map honest again.

A shorter test suite that still covers everything that matters. Several spec files covering deleted code were removed. The test suite got faster without losing any meaningful coverage.


Where AI Fits In: Finding Dead Code You Can't See

Here's the uncomfortable truth about dead code: it's often invisible to the people closest to it. If you wrote EstimateWorker two years ago and the team that decommissioned the upstream service never filed a ticket, you might not even know it's dead. The code looks fine. The tests pass. Nothing alerts you.

A Telling Real-World Example: Claude Gets Confused, Then Catches Itself

We recently asked Claude to generate a flow diagram of our pay guarantee process. Claude produced a diagram that looked plausible — tracing through services, models, and workers in a way that made logical sense.

The problem? Part of that diagram was wrong — because Claude had incorporated a module that was no longer active into its understanding of the flow. The dead code was so well-structured and apparently coherent that the AI read it as live infrastructure and wove it into the diagram without hesitation.

But here's what makes this story instructive rather than merely cautionary: when an engineer removed what was hopefully the last bit of dead code, Claude immediately recognized that the diagram it had drawn earlier relied on that bad signal, revised its understanding, and corrected the diagram.

That sequence — confidently wrong, then self-correcting — is a useful frame for thinking about AI and dead code. It fooled the AI for the same reason it fools engineers: it looks like it belongs.

What AI Can Do

Tracing call graphs at scale. AI can trace the full call graph of a function or class across an entire monorepo — answering not just with direct callers, but with the absence of callers.

Cross-referencing runtime signals with static code. When connected to observability data — logs, APM traces, queue metrics — an AI can compare what the code says it does with what actually runs in production.
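In practice that comparison can start as a simple set difference. A sketch, assuming you can export a worker-name-to-run-count mapping from your APM or log aggregator (the names, counts, and 90-day window below are invented):

```ruby
# Given the workers defined in code and a count of how often each one
# actually ran in production (say, over the last 90 days), anything
# defined but never observed is a dead-code candidate.
def dead_worker_candidates(defined_workers, observed_run_counts)
  defined_workers.reject { |w| observed_run_counts.fetch(w, 0) > 0 }
end

defined  = %w[EstimateWorker InvoiceWorker SyncWorker]      # hypothetical
observed = { "InvoiceWorker" => 1_204, "SyncWorker" => 87 } # hypothetical

dead_worker_candidates(defined, observed)  # => ["EstimateWorker"]
```

Candidates, not verdicts: a zero count over one window still needs a human to rule out yearly jobs and disaster-recovery paths.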

Flagging stale patterns. Dead code has fingerprints: models with no recent migrations, task names absent from any scheduler config, service classes with no callers outside their own spec files.
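The "no callers outside its own spec file" fingerprint in particular is mechanical enough to sketch. A toy version, with a hash standing in for the file tree (all paths and class names are hypothetical):

```ruby
# Flags classes that are referenced nowhere except their own file and
# spec files. A real tool would walk the repository and parse properly;
# here a hash stands in for the file tree and a regex for parsing.
def orphaned_classes(files)
  defined = files.flat_map do |path, src|
    src.scan(/class (\w+)/).flatten.map { |klass| [klass, path] }
  end
  defined.select do |klass, home|
    files.none? do |path, src|
      path != home && !path.include?("spec") && src.include?(klass)
    end
  end.map(&:first)
end

repo = {
  "app/workers/estimate_worker.rb"        => "class EstimateWorker; end",
  "app/services/billing.rb"               => "class Billing; end",
  "app/controllers/billing_controller.rb" => "Billing.new",
  "spec/estimate_worker_spec.rb"          => "EstimateWorker.new",
}

orphaned_classes(repo)  # => ["EstimateWorker"]
```

Real codebases defeat naive text search with metaprogramming and `constantize`, which is exactly where an AI that traces actual call graphs earns its keep.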

Drafting cleanup PRs. Once dead code is identified, AI can help draft the actual removal — proposing what to delete, what to rename, and what specs to clean up.

What AI Can't Do (Yet)

AI isn't a replacement for engineering judgment. A worker might be "dead" in CI but still referenced by a cron job in an ops runbook nobody's touched in three years.

The right model is AI as a scout, engineer as the decision-maker. AI surfaces candidates. Engineers verify, contextualise, and own the deletion.

Making Dead Code Cleanup a Habit

  1. Timestamp your decommissions. When you turn off a pipeline, leave a comment in the code with the date.
  2. Review your task dispatch maps regularly. A quarterly review catches orphaned entries before they fossilise.
  3. Use AI during onboarding and code review. AI tools can help new engineers quickly validate whether something is live — and surface it for cleanup if it isn't.
  4. Treat deletion as a first-class deliverable. 816 lines removed is a meaningful engineering contribution. Make it visible in sprint planning, changelogs, and retros.
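The first habit can even be made machine-enforceable rather than a comment convention. A hypothetical sketch of a dispatch-map tombstone (the task name and cut-off date are invented):

```ruby
require "date"

# Instead of letting a decommissioned task linger silently in the routing
# map, leave a loud, dated tombstone. Humans and AI assistants both learn
# the real status on first read, and any surprise caller fails fast.
DECOMMISSIONED = {
  "estimate_v2" => Date.new(2023, 11, 30), # hypothetical cut-off date
}.freeze

def dispatch(task)
  if (date = DECOMMISSIONED[task])
    raise "#{task} was decommissioned on #{date}; remove the caller instead"
  end
  # ... route live tasks as before ...
  :dispatched
end
```

If nothing trips the tombstone for a couple of quarters, deleting the entry outright becomes a low-risk, evidence-backed decision.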

Conclusion

Large codebases accumulate cognitive debt quietly, continuously, and with compounding interest. Dead code is one of the most expensive line items: it misleads engineers, bloats test suites, and turns routine code reading into archaeology.

As we saw first-hand, it even misleads AI. Claude confidently incorporated a dead module into a flow diagram of our pay guarantee process — because the code looked live. That moment of confusion, and the self-correction that followed, is a perfect metaphor for where we are with AI-assisted engineering today: powerful, promising, and most effective when paired with good runtime context and human judgment.

The goal isn't a perfect codebase. It's a codebase where the code you're reading is the code that's actually running. That's a goal worth shipping toward.
