April 11, 2026
The architectural decision layer AI agents need (and why it's not optional)
AI coding agents give confidently wrong advice on complex projects because they have no memory of the decisions that shaped your architecture. Here's what's actually missing.
You’re deep in a refactor with Claude Code. The codebase is mature, the tradeoffs are real, and over the past few months you’ve made a dozen non-obvious architectural calls. Use Postgres instead of SQLite because of the write contention at scale. Keep the service layer thin because the previous thick-service attempt caused circular dependency hell. Don’t abstract the retry logic yet because two of the three callers have diverging requirements.
Claude doesn’t know any of that. So it suggests SQLite for the new module. It moves logic into the service layer. It abstracts the retry logic. All confident, all reasonable on their face, all wrong for this project specifically.
This is the architectural decision layer problem. It’s not about the AI being bad at coding. It’s about the AI having no access to the reasoning layer that sits above the code.
Most projects don’t document this reasoning anywhere. It lives in Slack threads, in PR comments that get buried, in the heads of whoever was in the room when the decision was made. The code itself shows what was decided, but not why, and definitely not what constraints that decision places on everything that comes after it. When you bring an AI agent into a codebase, it reads the code but misses the entire constraint graph underneath it.
The result is an agent that’s productive on greenfield work and measurably wrong on complex projects. Not wrong in obvious ways, like syntax errors. Wrong in ways that violate invariants you’ve spent months establishing. It gives you advice that would work in a vacuum and breaks things in context.
What actually helps is capturing the decision layer as decisions are made. When you choose Postgres over SQLite, that’s not just a config change, it’s a constraint on every data layer decision that follows. When you flatten the service layer after a bad experience with the alternative, that’s architectural memory that should travel with the codebase. The code can’t communicate this. Comments don’t either, not reliably. You need something that captures high-signal commits and the rationale behind them, and surfaces that rationale to your AI agent at the start of every session.
KeepGoing’s decision detection does exactly this. It watches for commits and notes that signal architectural choices, the kind of thing that shows up in a commit message like “revert service consolidation, circular dep issue” or “switch auth to stateless JWT, Redis session overhead not worth it.” It pulls those decisions into the session context that gets handed to your AI agent each time you open a project. So when Claude Code starts a new session, it gets not just “here’s what you were working on” but “here are the constraints that are in play.”
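To make the idea concrete, here is a minimal sketch of what commit-based decision detection could look like. This is not KeepGoing’s actual implementation; the keyword patterns, function names, and context format below are all illustrative assumptions. The point is only that commit subjects carrying decision language can be filtered out and turned into a constraint preamble for an agent’s session context.

```python
import re

# Hypothetical phrases that tend to signal an architectural decision in a
# commit subject. A real detector would be far more sophisticated than this.
DECISION_PATTERNS = [
    r"\brevert\b",
    r"\bswitch(ed)?\b.*\bto\b",
    r"\binstead of\b",
    r"\bnot worth\b",
    r"\bbecause\b",
]

def detect_decision_commits(subjects):
    """Return the commit subjects that look like architectural decisions."""
    return [
        s for s in subjects
        if any(re.search(p, s, re.IGNORECASE) for p in DECISION_PATTERNS)
    ]

def session_context(decisions):
    """Format detected decisions as a constraint preamble for an AI agent."""
    if not decisions:
        return ""
    bullets = "\n".join(f"- {d}" for d in decisions)
    return f"Constraints in play from past decisions:\n{bullets}"

# Usage on sample commit subjects (in practice these would come from
# something like `git log --pretty=%s`):
subjects = [
    "revert service consolidation, circular dep issue",
    "switch auth to stateless JWT, Redis session overhead not worth it",
    "fix typo in README",
]
print(session_context(detect_decision_commits(subjects)))
```

The design choice worth noting: detection runs on commit subjects rather than code, because the rationale ("circular dep issue", "overhead not worth it") only ever appears in the message, never in the diff.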
If you’re using Claude Code or Cursor on a project that’s been alive for more than a few months, try this: open a new session and ask Claude to suggest an approach to something you’ve already decided. See what it says. If it matches what you actually chose and explains why, great. If it confidently recommends the thing you already tried and abandoned, that’s the gap. That’s what a decision layer is for.