Arrange Act Assert

Jag Reehal's thinking on things, mostly product development

Message Isolation? Autotel Makes Tenant Context Flow

17 Jan 2026

The Signadot team is spot on in their Testing Event-Driven Architectures with OpenTelemetry post.

Message isolation using a shared queue: propagate the tenant ID in Kafka message headers; consumers use it for selective message consumption.

They make the case that infrastructure duplication is expensive. Instead of separate Kafka clusters per environment, use tenant ID filtering on a shared queue. Instrument producers and consumers for context propagation.

We've all been there: maintaining four "identical" Kafka setups that slowly drift apart.

Their key insight:

Requires modifying consumers and using OpenTelemetry for context propagation.
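In plain terms, the filtering they describe can be sketched with a tenant header on each message. The header name and message shape below are illustrative, not the Signadot implementation:

```typescript
// A minimal sketch of tenant-based filtering on a shared queue.
// "x-tenant-id" is an assumed header name, not a standard.

type Headers = Record<string, string>;

interface QueueMessage {
  headers: Headers;
  value: string;
}

// Producer side: stamp the tenant ID into the message headers.
function withTenantHeader(msg: QueueMessage, tenantId: string): QueueMessage {
  return { ...msg, headers: { ...msg.headers, "x-tenant-id": tenantId } };
}

// Consumer side: only process messages for the tenant this consumer
// (e.g. a sandboxed test environment) cares about.
function shouldConsume(msg: QueueMessage, tenantId: string): boolean {
  return msg.headers["x-tenant-id"] === tenantId;
}

const messages: QueueMessage[] = [
  withTenantHeader({ headers: {}, value: "order-1" }, "sandbox-a"),
  withTenantHeader({ headers: {}, value: "order-2" }, "baseline"),
];

const mine = messages.filter((m) => shouldConsume(m, "sandbox-a"));
console.log(mine.map((m) => m.value)); // ["order-1"]
```

The point is that both environments share one topic; isolation comes entirely from the header check in the consumer.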

But there's still a gap...

Read More →

Request-Level Isolation? Autotel Propagates Context Automatically

16 Jan 2026

The CNCF team is spot on in their Testing Asynchronous Workflows using OpenTelemetry and Istio post.

Request-level isolation is the most cost-effective approach.

They make the case against duplicating infrastructure for testing. Instead of spinning up separate Kafka clusters per tenant, use OpenTelemetry Baggage to propagate tenant ID through async flows. Consumers filter by tenant ID. Istio handles routing.

We've all been there: every team has their own "staging Kafka" and costs balloon.

Their key insight:

Use OpenTelemetry Baggage to propagate tenant ID through sync and async. When publishing to Kafka, producers inject trace context (including baggage) into message headers; consumers extract and make routing decisions.
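The baggage header itself is just W3C-format key=value pairs, so the propagation step can be sketched by hand. The `tenant.id` key is an assumed name, and real code would use OpenTelemetry's propagator APIs rather than manual encoding:

```typescript
// Sketch of how OpenTelemetry Baggage travels in a Kafka header.
// W3C Baggage is comma-separated key=value pairs with percent-encoded
// values; "tenant.id" is an assumption, not a standard key.

function encodeBaggage(entries: Record<string, string>): string {
  return Object.entries(entries)
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join(",");
}

function decodeBaggage(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const pair of header.split(",")) {
    const [k, v] = pair.split("=");
    if (k && v !== undefined) out[k.trim()] = decodeURIComponent(v.trim());
  }
  return out;
}

// Producer: inject baggage alongside trace context into message headers.
const headers = { baggage: encodeBaggage({ "tenant.id": "sandbox-a" }) };

// Consumer: extract and make a routing decision.
const tenant = decodeBaggage(headers.baggage)["tenant.id"];
console.log(tenant); // "sandbox-a"
```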

But there's still a gap...

Read More →

End-to-End Tracing? Autotel Makes It Automatic

15 Jan 2026

The OSO team is spot on in their End-to-End Tracing in Event Driven Architectures post.

Traces break at queues unless you extract the trace context from message headers and restore it as the active context.

They walk through the real pain: stateful processing loses trace context in caches, Kafka Connect can only do batch-level tracing, and every team ends up writing custom interceptors and state store wrappers.

We've all been there.

Their key insight:

In Kafka Streams and Kafka Connect this often means manual work: interceptors, state stores, batch spans, or extending tracing logic to extract from headers.
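A rough sketch of the extraction step they mean: parse a W3C `traceparent` header from a record by hand so a new span can be linked to the upstream trace. Production code would use OpenTelemetry's propagators instead:

```typescript
// Parse a W3C traceparent header: "00-<32 hex traceId>-<16 hex
// spanId>-<2 hex flags>". This is the context that must be pulled out
// of Kafka record headers to keep the trace connected.

interface TraceContext {
  traceId: string;
  parentSpanId: string;
  sampled: boolean;
}

function parseTraceparent(header: string): TraceContext | null {
  const m = /^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null;
  return {
    traceId: m[1],
    parentSpanId: m[2],
    sampled: (parseInt(m[3], 16) & 0x01) === 1, // low bit = sampled flag
  };
}

const ctx = parseTraceparent(
  "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
);
console.log(ctx?.traceId); // "4bf92f3577b34da6a3ce929d0e0e4736"
```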

But there's still a gap...

Read More →

Logging Sucks. Autotel Makes Wide Events the Default

08 Jan 2026

Boris is spot on in his Logging Sucks post.

logs are optimised for writing, not querying

He explains why debugging in production feels like archaeology.

You grep for user-123, find it logged 47 different ways, then spend an hour correlating timestamps across services.

We've all been there.

His wide event example nails it:

{
  "user": {"id": "user_456", "subscription": "premium", "lifetime_value_cents": 284700},
  "cart": {"item_count": 3, "total_cents": 15999, "coupon_applied": "SAVE20"},
  "payment": {"method": "card", "provider": "stripe", "latency_ms": 1089},
  "error": {"type": "PaymentError", "code": "card_declined", "stripe_decline_code": "insufficient_funds"}
}

One event. High-cardinality keys (user.id, traceId). High dimensionality. Queryable.
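One way to read the pattern: each request accumulates fields into a single structured event and emits it once, instead of scattering log lines. A minimal sketch, with an illustrative API:

```typescript
// One wide event per request: services attach namespaced fields as the
// request flows, then emit a single queryable object at the end.
// Field names mirror Boris's example; the class API is illustrative.

class WideEvent {
  private fields: Record<string, unknown> = {};

  set(namespace: string, values: Record<string, unknown>): this {
    this.fields[namespace] = {
      ...(this.fields[namespace] as object),
      ...values,
    };
    return this;
  }

  emit(): Record<string, unknown> {
    return this.fields; // in practice: one JSON line or OTel log record
  }
}

const event = new WideEvent()
  .set("user", { id: "user_456", subscription: "premium" })
  .set("cart", { item_count: 3, total_cents: 15999 })
  .set("error", { type: "PaymentError", code: "card_declined" })
  .emit();

console.log(Object.keys(event)); // ["user", "cart", "error"]
```

Because everything about the request lives in one object, "find declined payments for premium users" becomes a single query instead of an hour of timestamp correlation.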

But there’s still a gap…

Read More →

Reranking: Improving Search Relevance with the AI SDK

23 Dec 2025

Reranking improves search relevance by reordering documents based on their relevance to a query. Unlike embedding-based similarity search, reranking models are specifically trained to understand the relationship between queries and documents, often producing more accurate relevance scores.
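The shape of a reranking step can be sketched without a model: score each document against the query, then sort by score. The lexical-overlap scorer below is a toy stand-in for the trained reranking model a provider would supply:

```typescript
// Rerank = score each (query, document) pair, then reorder by score.
// toyScore is a deliberately naive stand-in for a trained reranker.

interface Ranked {
  document: string;
  score: number;
}

function toyScore(query: string, document: string): number {
  const queryTerms = new Set(query.toLowerCase().split(/\s+/));
  const docTerms = document.toLowerCase().split(/\s+/);
  const hits = docTerms.filter((t) => queryTerms.has(t)).length;
  return hits / docTerms.length; // fraction of doc terms matching query
}

function rerank(query: string, documents: string[]): Ranked[] {
  return documents
    .map((document) => ({ document, score: toyScore(query, document) }))
    .sort((a, b) => b.score - a.score);
}

const results = rerank("kafka tenant isolation", [
  "istio routing rules",
  "tenant isolation on kafka",
  "react server components",
]);
console.log(results[0].document); // "tenant isolation on kafka"
```

Swapping `toyScore` for a real cross-encoder call is the whole difference between this sketch and a production pipeline; the reordering logic stays the same.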

Reranking

Read More →

AI Code Needs Rules, Not Rituals

20 Oct 2025

Coding agents are here to stay, and I know I’m absolutely right about that. While we're all getting used to workflows using AI-powered coding agents, we now live in the world of dark arts and rituals.

We spend hours tweaking prompts and creating elaborate Claude.md, Agents.md and other files, formatted in a particular way and stored in a particular way, essentially performing black magic in the hope that our LLM agent adheres to our team's best practices and coding patterns.

On a good day, it works. On others, the behaviour is random.

Here's the problem: Random is not good enough for production.

We're trying to force a non-deterministic, generative tool to be a deterministic rule-follower. This is the wrong approach.

Instead, we should let AI do what it does best: be creative and generative, helping us realise our desired outcomes while following instructions and examples for how, what, and where it should generate.

My advice? Stop relying on hope-driven prompting. Start using linters to guarantee your standards.
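As a concrete example of a rule replacing a ritual, a team convention can be encoded in an ESLint flat config. The restricted package and the `@acme/llm-client` replacement are made-up names for illustration:

```javascript
// eslint.config.js: encode a team convention as a deterministic rule
// instead of a prompt. The package names below are examples only.
export default [
  {
    rules: {
      // "Always use our wrapped client, never the raw SDK"
      "no-restricted-imports": [
        "error",
        {
          paths: [
            {
              name: "openai",
              message: "Import from '@acme/llm-client' instead.",
            },
          ],
        },
      ],
    },
  },
];
```

Unlike a Claude.md instruction, this fails the build every single time the pattern is violated, whether the code was written by a human or an agent.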

Agent

Read More →

Agent: The Sugar Syntax of streamText

19 Sep 2025

For most of us, AI still feels like a black box. We send it a prompt and we get back a blob of text. Maybe we write some code to call a tool; maybe we juggle a few callbacks. We tell ourselves that this is just how things work: a model can only generate tokens, and tools can only run in our code.

But what if this mental model is the problem?

In this post I want to argue that the Agent pattern in the AI SDK is as revolutionary for AI development as useState and useEffect were for React. Just like React's client/server directives annotate where code runs across the network, the Agent API annotates where logic runs across the AI/model boundary.
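The loop an Agent abstraction runs for you can be sketched in plain TypeScript. The stub model and tool below stand in for a real provider and are not the AI SDK's actual types:

```typescript
// The loop an Agent runs: call the model, execute any tool it
// requests, feed the result back, stop when it answers in plain text.

type ModelTurn =
  | { type: "text"; text: string }
  | { type: "tool-call"; tool: string; input: string };

type Tool = (input: string) => string;

function runAgent(
  model: (history: string[]) => ModelTurn,
  tools: Record<string, Tool>,
  prompt: string,
  maxSteps = 5
): string {
  const history = [prompt];
  for (let step = 0; step < maxSteps; step++) {
    const turn = model(history);
    if (turn.type === "text") return turn.text; // model answered directly
    const result = tools[turn.tool](turn.input); // run the requested tool
    history.push(`tool:${turn.tool} -> ${result}`); // feed result back
  }
  return "max steps reached";
}

// Stub model: asks for the weather tool once, then answers.
const stubModel = (history: string[]): ModelTurn =>
  history.some((h) => h.startsWith("tool:weather"))
    ? { type: "text", text: "It is sunny in Leeds." }
    : { type: "tool-call", tool: "weather", input: "Leeds" };

const answer = runAgent(stubModel, { weather: () => "sunny" }, "Weather in Leeds?");
console.log(answer); // "It is sunny in Leeds."
```

Everything inside `runAgent` is what the Agent API hides: you declare the model and tools, and the loop across the AI/model boundary is no longer your code.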

Agent

Read More →

Build Safer, Faster, and More Reliable AI Apps with AI SDK Middleware

27 Aug 2025

Having recently built an AI Guardrails library for the AI SDK, I wanted to share what I learned along the way. This post will walk you through how you can write your own middleware, and why it's such a game-changer for building robust AI applications.

Design AI features that are safer, faster, and easier to evolve by layering language model middleware. This guide explains how to use AI SDK middleware to transform inputs, post-process outputs, enforce safety rules, cache results, observe performance, and handle streaming using a clean, composable approach aligned with official guidance.
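The layering idea can be sketched with plain function composition. This mirrors the shape of language model middleware but deliberately avoids the SDK's real types, so the names here are illustrative:

```typescript
// Middleware in miniature: wrap a generate function with composable
// layers (guardrail, cache), the same shape AI SDK middleware takes.

type Generate = (prompt: string) => Promise<string>;
type Middleware = (next: Generate) => Generate;

const withCache = (): Middleware => {
  const cache = new Map<string, string>();
  return (next) => async (prompt) => {
    const hit = cache.get(prompt);
    if (hit !== undefined) return hit; // skip the model entirely
    const out = await next(prompt);
    cache.set(prompt, out);
    return out;
  };
};

const withGuardrail = (blocked: RegExp): Middleware =>
  (next) => async (prompt) =>
    blocked.test(prompt) ? "Request blocked by policy." : next(prompt);

const compose = (...layers: Middleware[]): Middleware =>
  (next) => layers.reduceRight((acc, layer) => layer(acc), next);

// Stub model call standing in for a real provider.
let calls = 0;
const model: Generate = async (p) => (calls++, `echo: ${p}`);

const generate = compose(withGuardrail(/password/i), withCache())(model);

const first = await generate("hello");
const second = await generate("hello"); // cache hit: model not called again
const blocked = await generate("what is the admin password?");
console.log(first, calls); // "echo: hello" 1
```

The guardrail sits outermost, so blocked prompts never reach the cache or the model; that ordering decision is exactly the kind of design choice middleware composition makes explicit.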

Read More →