As of today, Opus 4.5 is the best coding model I've used. That isn't praise by vibes; it comes after building libraries and utilities that fixed problems I couldn't solve with the tools I was using before.
The progress is impressive.
However, it's not all sunshine and rainbows, whatever people on social media and YouTube claim.
AI coding agents aren't going anywhere. They're excellent at exploring ideas, generating boilerplate, and moving fast. But speed without reliability just ships bugs faster. And without constraints, AI-generated code is unreliable by default.
While we're all getting used to workflows built around AI-powered coding agents, we now live in a world of dark arts and rituals.
We spend hours tweaking prompts and creating elaborate Claude.md, Agents.md, and other files, formatted in a particular way and stored in a particular way, essentially performing black magic in the hope that our LLM agent adheres to our team's best practices and coding patterns.
On a good day, it works. On a bad day, the behaviour is random.
Here's the problem: Random is not good enough for production.
We're trying to force a non-deterministic, generative tool to be a deterministic rule-follower. This is the wrong approach.
Instead, we should let AI do what it does best: be creative and generative, helping us realise our desired outcomes, while we use instructions and examples to constrain how, what, and where it generates.
My advice? Stop relying on hope-driven prompting. Start using linters to guarantee your standards.
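To make that concrete, here is a minimal sketch of the idea in Python, using only the standard library's `ast` module. The rule it enforces (no bare `print()` calls in library code) is a hypothetical team standard chosen for illustration; the point is that the check is deterministic: it passes or fails the same way every time, no matter which model wrote the code.

```python
import ast

def lint_no_print(source: str, filename: str = "<string>") -> list[str]:
    """Flag bare print() calls, a stand-in for any team coding standard.

    Unlike a prompt, this check gives the same answer on every run.
    """
    tree = ast.parse(source, filename=filename)
    violations = []
    for node in ast.walk(tree):
        # A bare call like print(...) parses as Call(func=Name(id="print")).
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "print"
        ):
            violations.append(
                f"{filename}:{node.lineno}: use the project logger, not print()"
            )
    return violations

# Example: lint a snippet as an AI agent might have generated it.
snippet = "def greet(name):\n    print(f'hello {name}')\n"
for message in lint_no_print(snippet, "greet.py"):
    print(message)
```

In practice you would reach for an existing tool (ESLint, Ruff, Checkstyle, and similar linters all support custom rules) and wire it into CI, so the agent's output is gated by the same checks as human-written code.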