Arrange Act Assert

Jag Reehal's thinking on things, mostly product development

AI Code Needs Rules, Not Rituals

20 Oct 2025

Coding agents are here to stay, and I know I'm absolutely right about that. But while we're all getting used to workflows built around AI-powered coding agents, we now live in a world of dark arts and rituals.

We spend hours tweaking prompts and creating elaborate Claude.md, Agents.md and other files, formatted in a particular way and stored in a particular place, essentially performing black magic in the hope that our LLM agent adheres to our team's best practices and coding patterns.

On a good day, it works. On others, the behaviour is random.

Here's the problem: Random is not good enough for production.

We're trying to force a non-deterministic, generative tool to be a deterministic rule-follower. This is the wrong approach.

Instead, we should let AI do what it does best: be creative and generative, helping us achieve and realise our desired outcomes while following instructions and examples for how, what, and where it should generate.

My advice? Stop relying on hope-driven prompting. Start using linters to guarantee your standards.


The Problem with Hope-Driven Development

After months of working with AI coding agents across different projects, I've noticed a pattern that's become impossible to ignore. We've all been there: spending hours crafting the perfect prompt, organising our project files just so, and crossing our fingers that this time, the AI will follow our coding standards.

But here's what I've learned: hope is not a strategy for production code.

In this article, I'll show you why relying on prompts alone is fundamentally flawed, and how to build a deterministic system that guarantees your AI-generated code meets your standards. You'll discover practical examples of linting rules that solve real problems, learn how to create a self-correcting AI workflow, and understand why this approach future-proofs your development process.

By the end, you'll have a concrete plan to stop hoping for quality and start engineering it.

A Lesson from the Trenches

Last month, I spent three hours debugging why our AI-generated React components were causing bundle size warnings. The issue? Inconsistent import patterns across 47 files. Some used import { z } from 'zod', others used import * as z from 'zod'. Our AI agent had been trained on mixed examples and couldn't maintain consistency.

That's when I realised: we were asking the wrong question. Instead of "How can we make our AI more consistent?", we should have been asking "How can we guarantee consistency regardless of AI behaviour?"

The solution wasn't better prompting—it was better tooling.

Example 1: The Subtle Art of Imports

It's not just about correctness; consistency and performance matter too.

Consider this common Zod import:

```js
// ❌ This *can* pull in unnecessary bytes depending on your bundler
// Some bundlers may not tree-shake this effectively
import { z } from 'zod';
```

Many teams, aiming for optimal bundle sizes, prefer this pattern:

```js
// ✅ This is often more efficient and easier for tree-shaking
// Bundlers can more reliably eliminate unused exports
import * as z from 'zod';
```

Will your AI agent know this? Maybe. Will it remember 100% of the time? Absolutely not.

Instead of adding this trivia to a prompt file you hope the AI reads, just add a dedicated lint rule. The community has already solved this with eslint-plugin-import-zod. Using that rule guarantees that both the AI and we humans import Zod in the optimal way.
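As a sketch, wiring the plugin into an ESLint flat config might look like this. The plugin's export shape and the rule id are assumptions based on common flat-config conventions; check the plugin's README for the exact setup:

```js
// eslint.config.js — a sketch, assuming typical flat-config plugin conventions
import importZod from 'eslint-plugin-import-zod';

export default [
  {
    plugins: { 'import-zod': importZod },
    rules: {
      // Rule id is an assumption; see the plugin's README for the exact name
      'import-zod/prefer-zod-namespace': 'error',
    },
  },
];
```

Once this is in place, the import style is no longer a matter of whether the AI "remembers" your preference; any deviation fails the lint run.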

Example 2: The React Hooks Minefield

Let's talk about React. The Rules of Hooks are notoriously tricky: there's advice, then guidance on that advice, then clarifications that muddy it further, and who knows which patterns and docs the AI was trained on.

With so many edge cases, models often make hook mistakes.

Thankfully, the React team provides a deterministic solution for this: eslint-plugin-react-hooks.

This is where the "prompting" strategy truly falls apart. That ESLint plugin was recently updated to support the new React Compiler and the useEffectEvent API.

Now, ask yourself: does the model you're using even know those changes exist? Was it trained on any code that uses them?

This is the "LLM Lag" in action. Remember: LLMs are trained on older codebases. They often won't know the latest patterns or subtle rule changes. While MCP servers such as context7 can help, it's still up to your coding tool to first use the tools available and then act on the information retrieved.

In contrast, ESLint and other linting tools provide the deterministic check you need, ensuring that even AI-generated code adheres to the most modern React practices.
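Enabling the plugin in a flat config is nearly a one-liner. A minimal sketch, assuming a recent plugin version (the `recommended-latest` preset key is what newer releases document; older releases expose a legacy `recommended` config instead):

```js
// eslint.config.js — enabling the React team's deterministic hooks checks
import reactHooks from 'eslint-plugin-react-hooks';

export default [
  // 'recommended-latest' is the flat-config preset in recent plugin versions;
  // older releases expose a legacy 'recommended' config instead
  reactHooks.configs['recommended-latest'],
];
```

When the React team ships new rules, you update one dependency and every AI-generated component is checked against them.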

Why ESLint? Because Determinism is Your Reassurance

In a world of AI randomness, ESLint delivers determinism.

Why risk your code quality and standards on chance?

Integrating ESLint into your workflow provides a reliable safety net and ensures code meets production quality standards.

The "Self-Correcting" AI Workflow

This is where it gets truly powerful. You don't just run the linter at the end. You build a feedback loop.

Imagine this workflow:

  1. The AI coding agent generates a new component or feature.
  2. A hook (e.g., a file-save hook in your IDE, or a post-tool-use hook) runs eslint --fix on the generated files.
  3. The agent sees the linting errors or the resulting code changes.
  4. In its next turn, the AI can self-correct based on the deterministic feedback from the linter.

The AI remains fast, but lint feedback keeps code on track with your quality standards.
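As one concrete sketch of step 2, a post-tool-use hook in Claude Code's settings file could run ESLint's auto-fix after every file edit. The matcher and command below are assumptions; adapt them to your own agent tooling and project layout:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx eslint --fix ."
          }
        ]
      }
    ]
  }
}
```

The same idea works with any tool that can run a command after file changes; the linter, not the hook mechanism, is what provides the determinism.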

Here's how this self-correcting workflow looks in practice:

AI Agent Generates Code
→ File Save Hook Triggers
→ ESLint Runs Automatically
→ Linting Errors?
    • Yes → ESLint Auto-Fixes Issues → AI Sees Fixed Code → AI Learns from Corrections → Next Generation Improved
    • No → Code Meets Standards

This creates a continuous improvement loop where the AI learns from deterministic feedback, becoming more aligned with your standards over time.

Prevent Vendor Lock-in

While we hope for standardisation, the AI market is a fast-moving, innovative space. Today's way of doing things becomes the old way. I've seen this with Cursor, and now we have skills with Claude. I use a variety of AI tools, as they all have their strengths and weaknesses, and linting rules give me consistency I can rely on across all of them.

The Added Bonus: Future-Proofing Your Codebase

This workflow brings huge long-term benefits.

Next month, when a new React pattern emerges or your team discovers a new performance optimisation, you don't need to retrain or re-prompt your AI.

You just add the new ESLint rule.

From that moment on, you can be assured that any AI-generated code will follow this new best practice.

Even better, you can now ask the AI to help you refactor your existing codebase, and the linter will act as its guide, ensuring the improvements are applied consistently.

Key Takeaways

  1. Stop hoping for quality. Engineer it. Replace hope-driven prompting with deterministic linting rules that guarantee your standards.

  2. Build a self-correcting feedback loop. Integrate ESLint into your AI workflow so the agent learns from deterministic corrections.

  3. Future-proof your approach. When new best practices emerge, add a lint rule rather than updating prompts across multiple files.

  4. Maintain consistency across tools. Linting rules work regardless of which AI tool you're using, preventing vendor lock-in.

Your Next Steps

  1. Audit your current prompts. Identify which "rules" in your prompt files could be enforced with linting instead.

  2. Start with one rule. Pick the most common inconsistency in your AI-generated code and create an ESLint rule for it.

  3. Set up the feedback loop. Configure your IDE or build process to run eslint --fix automatically on AI-generated files.

  4. Gradually expand. Add more rules as you discover patterns, building a comprehensive quality system.
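For step 2, you often don't even need a custom plugin. For instance, if the named Zod import were your most common inconsistency, ESLint's built-in no-restricted-imports rule can ban it outright (the message text here is my own):

```js
// eslint.config.js — banning the named Zod import with a built-in ESLint rule
export default [
  {
    rules: {
      'no-restricted-imports': ['error', {
        paths: [{
          name: 'zod',
          importNames: ['z'],
          message: "Prefer `import * as z from 'zod'` for better tree-shaking.",
        }],
      }],
    },
  },
];
```

Starting with built-in rules like this keeps the barrier low; you can graduate to community plugins or custom rules as your quality system grows.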

A Balanced Approach

This isn't about abandoning prompts entirely. High-level guidance, team conventions, and architectural decisions still benefit from well-crafted prompts. But for the nitty-gritty details (import patterns, hook usage, naming conventions), let your linters do the heavy lifting.

Stop hoping for quality. Engineer it.

Let your AI be the generative engine, and let your linters be the deterministic, ever-updatable guardrails that future-proof your codebase.

When a new best practice emerges, you won't need to rewrite your markdown prompt files; you'll just add a rule.
