Arrange Act Assert

Jag Reehal's thinking on things, mostly product development

Dual-Layer Locking: Redis for Speed, PostgreSQL for Reliability

02 Jun 2025

Two users withdraw money from the same account at exactly the same moment. Your system processes both requests, your database balance goes negative, and you wake up to an incident report. Sound familiar?

What if you could combine Redis's millisecond response times with PostgreSQL's bulletproof consistency? This dual-layer locking pattern does exactly that, giving you both speed and safety.


In this post, we'll explore a pattern that combines Redis's speed with PostgreSQL's reliability to prevent race conditions at scale.

The Problem: Race Conditions at Scale

Consider this scenario: 100 concurrent requests to update the same user account. Without proper locking:

Request 1 reads £100, then writes £80 (a £20 withdrawal).
Request 2 reads £100, then writes £50 (a £50 withdrawal).
Request 3 reads £100, then writes £120 (a £20 deposit).

Final balance: £120. Applying all three changes to the starting £100 should leave £50.

Figure 1: Race condition scenario with concurrent balance updates.

The last write wins, and you've lost £70. In production, the 'thundering herd' problem, where many processes try to acquire a lock at once, can make things even worse by overwhelming your system.
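To make the lost update concrete, here's a minimal sketch (a hypothetical in-memory balance standing in for the database) where two concurrent withdrawals both read before either writes:

```typescript
// Hypothetical in-memory "account" standing in for the database.
let balance = 100;

// Unsafe read-modify-write: the await between read and write lets a
// concurrent withdrawal read the same stale balance.
async function unsafeWithdraw(amount: number): Promise<void> {
  const current = balance; // read
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated I/O
  balance = current - amount; // write: clobbers any concurrent update
}

async function demo(): Promise<number> {
  await Promise.all([unsafeWithdraw(20), unsafeWithdraw(50)]);
  // The correct result would be 100 - 20 - 50 = 30, but one write is lost,
  // so the final balance is 80 or 50 depending on which write lands last.
  return balance;
}
```

Run it and the balance never comes out at the correct £30; one of the withdrawals simply vanishes.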

The Solution: Dual-Layer Defense

Think of it like securing a bank vault: you need both the outer perimeter (Redis) and the inner vault door (PostgreSQL).

Step 1: An incoming request arrives.

Step 2: A Redis lock (fast distribution) serialises requests across application instances.

Step 3: A PostgreSQL transaction (data integrity) applies the change, resulting in a safe update.

Figure 2: Dual-layer defence—Redis for fast distribution, PostgreSQL for data integrity.

Layer 1: Redis Distributed Lock

A fast, coarse-grained lock that serialises requests across application instances before any database work begins.

Layer 2: PostgreSQL Row Lock

A SELECT ... FOR UPDATE row lock inside the transaction, so even if the Redis lock expires or fails, the database itself still guarantees consistency.

What is Optimistic Locking?

Optimistic locking is a technique that allows multiple transactions to proceed without locking resources up front. Instead, each transaction checks whether the data has changed before committing. If another transaction has modified the data, the update fails, and the process can retry. This helps catch rare edge cases where data changes between read and write.
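As a minimal illustration (a hypothetical in-memory row standing in for an accounts table), the version check is a compare-and-swap: the write only succeeds if the version read earlier is still current:

```typescript
interface Row {
  balance: number;
  version: number;
}

// Hypothetical in-memory table standing in for the accounts table.
const table = new Map<string, Row>([['acct-1', { balance: 100, version: 1 }]]);

// Mirrors `UPDATE ... SET version = version + 1 WHERE id = $1 AND version = $2`:
// the update only applies if nobody bumped the version since we read it.
function optimisticUpdate(
  id: string,
  expectedVersion: number,
  newBalance: number
): boolean {
  const row = table.get(id);
  if (!row || row.version !== expectedVersion) return false; // stale read: caller retries
  table.set(id, { balance: newBalance, version: expectedVersion + 1 });
  return true;
}
```

A first writer holding the current version succeeds; a second writer holding the now-stale version fails and must re-read and retry.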

Production-Ready Implementation

A request first tries to acquire the Redis lock. On failure it waits with exponential backoff and retries; once retries are exhausted it returns an error. With the lock acquired, a database transaction begins: the rows are locked with SELECT FOR UPDATE, the version is checked, and if it is still valid the update is committed. If the version check fails, the transaction rolls back and retries. Finally, the Redis lock is released and the request succeeds.

Figure 4: Dual-layer locking process from request to success or error.

This flow can be implemented in code like this:

// TransferRequest defines the structure for a money transfer operation
interface TransferRequest {
  fromAccountId: string;
  toAccountId: string;
  amount: number;
}

// Lock describes a held Redis lock: the key, the owner's token, and the TTL
interface Lock {
  key: string;
  token: string;
  timeoutMs: number;
}

class AccountService {
  async transferMoney({ fromAccountId, toAccountId, amount }: TransferRequest) {
    // CRITICAL: Always acquire locks in consistent order to prevent deadlocks
    const [firstId, secondId] = [fromAccountId, toAccountId].sort();
    const lockKey = `transfer:${firstId}:${secondId}`;

    // Acquire a distributed lock in Redis
    const lock = await this.acquireRedisLock(lockKey);

    try {
      await this.db.query('BEGIN');

      // Lock rows in same order as Redis locks
      const accounts = await this.db.query(
        `
        SELECT id, balance, version 
        FROM accounts 
        WHERE id IN ($1, $2) 
        ORDER BY id
        FOR UPDATE
      `,
        [firstId, secondId]
      );

      const fromAccount = accounts.rows.find((a) => a.id === fromAccountId);
      const toAccount = accounts.rows.find((a) => a.id === toAccountId);

      if (!fromAccount || !toAccount) {
        throw new InvalidAccountError('One or both accounts do not exist');
      }

      if (fromAccount.balance < amount) {
        throw new InsufficientFundsError(
          `Balance: ${fromAccount.balance}, Required: ${amount}`
        );
      }

      // Optimistic-locking updates; a single pg client queues queries, so
      // these run one after another inside the same transaction
      const results = await Promise.all([
        this.db.query(
          `
          UPDATE accounts 
          SET balance = balance - $1, version = version + 1, updated_at = NOW()
          WHERE id = $2 AND version = $3
        `,
          [amount, fromAccountId, fromAccount.version]
        ),

        this.db.query(
          `
          UPDATE accounts 
          SET balance = balance + $1, version = version + 1, updated_at = NOW()
          WHERE id = $2 AND version = $3  
        `,
          [amount, toAccountId, toAccount.version]
        ),
      ]);

      // Verify both updates succeeded (optimistic lock check)
      if (results.some((result) => result.rowCount === 0)) {
        throw new ConcurrentModificationError(
          'Account was modified by another transaction'
        );
      }

      await this.db.query('COMMIT');
    } catch (error) {
      await this.db.query('ROLLBACK');
      throw error;
    } finally {
      // Always release the Redis lock
      await this.releaseRedisLock(lock);
    }
  }

  // Acquire a distributed lock in Redis with retries and exponential backoff
  private async acquireRedisLock(
    key: string,
    timeoutMs = 30000
  ): Promise<Lock> {
    const token = crypto.randomUUID(); // Web Crypto API, global in Node 19+
    const maxRetries = 3;
    let attempt = 0;

    while (attempt < maxRetries) {
      const acquired = await this.redis.set(key, token, 'PX', timeoutMs, 'NX');

      if (acquired) {
        return { key, token, timeoutMs };
      }

      // Exponential backoff: 50ms, 100ms, 200ms
      const delay = 50 * Math.pow(2, attempt);
      await new Promise((resolve) => setTimeout(resolve, delay));
      attempt++;
    }

    throw new LockAcquisitionError(
      `Failed to acquire lock after ${maxRetries} attempts`
    );
  }

  // Release the Redis lock using an atomic Lua script
  private async releaseRedisLock({ key, token }: Lock): Promise<void> {
    // Lua script ensures atomic check-and-delete
    const script = `
      if redis.call("GET", KEYS[1]) == ARGV[1] then
        return redis.call("DEL", KEYS[1])
      else
        return 0
      end
    `;

    await this.redis.eval(script, 1, key, token);
  }
}

Critical Design Patterns

1. Lock Ordering Prevents Deadlocks

Both transfers sort the account IDs first, so Transfer A→B and Transfer B→A compute the same lock key, "transfer:A:B". The first transfer acquires the Redis lock, locks accounts A and B in sorted order, completes, and releases the lock. The second transfer waits for the first, then acquires the same lock and proceeds safely.

Figure 3: Consistent lock ordering prevents deadlocks.

Why this matters: Without consistent ordering, Transfer A→B and Transfer B→A can deadlock each other.
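The ordering trick is just sorting the IDs before building the key, so both directions of a transfer contend on the same lock (a small sketch using the transfer: key format from the code above):

```typescript
// Both directions of a transfer derive the same lock key, because the
// account IDs are sorted before the key is built.
function lockKeyFor(accountA: string, accountB: string): string {
  const [first, second] = [accountA, accountB].sort();
  return `transfer:${first}:${second}`;
}
```

Since A→B and B→A map to one key, they can never each hold a lock the other is waiting for.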

2. Exponential Backoff Handles Contention

// Bad: Hammers Redis with retries (thundering herd problem)
for (let i = 0; i < 100; i++) {
  if (await tryLock()) break;
  // Retry immediately - creates thundering herd
}

// Good: Exponential backoff with jitter
const delay = Math.min(
  baseDelay * Math.pow(2, attempt) + Math.random() * 100,
  maxDelay
);
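The 'good' version can be wrapped in a reusable helper (a sketch; the baseDelay and maxDelay values here are illustrative):

```typescript
// Exponential backoff with jitter: the delay doubles each attempt, a random
// 0-100ms is added to spread out competing clients, and the result is capped.
function backoffDelay(attempt: number, baseDelay = 50, maxDelay = 2000): number {
  const exponential = baseDelay * Math.pow(2, attempt);
  const jitter = Math.random() * 100;
  return Math.min(exponential + jitter, maxDelay);
}
```

The jitter matters as much as the doubling: without it, clients that collided once all wake up at the same instant and collide again.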

3. Optimistic Locking Catches Edge Cases

Even with locks in place, version checks help catch rare edge cases where another transaction modifies the row between reads and writes:

// This will fail if another transaction modified the row
UPDATE accounts
SET balance = balance - $1, version = version + 1
WHERE id = $2 AND version = $3  -- This condition fails if row was modified

Error Handling That Actually Works

class TransferService {
  async transferWithRetry(request: TransferRequest): Promise<TransferResult> {
    const maxRetries = 3;
    let lastError: Error | undefined;

    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return await this.transferMoney(request);
      } catch (error) {
        lastError = error as Error;

        // Don't retry business logic errors
        if (
          error instanceof InsufficientFundsError ||
          error instanceof InvalidAccountError
        ) {
          throw error;
        }

        // Retry on infrastructure failures
        if (
          error instanceof LockAcquisitionError ||
          error instanceof ConcurrentModificationError
        ) {
          if (attempt === maxRetries) break;

          const delay = this.calculateBackoff(attempt);
          await this.sleep(delay);
          continue;
        }

        // Unknown error - don't retry
        throw error;
      }
    }

    throw new MaxRetriesExceededError(
      `Failed after ${maxRetries} attempts`,
      lastError
    );
  }

  private calculateBackoff(attempt: number): number {
    // Exponential backoff with jitter: ~100ms, ~200ms, ~400ms
    const base = 100 * Math.pow(2, attempt - 1);
    const jitter = Math.random() * 100;
    return Math.min(base + jitter, 5000); // Cap at 5 seconds
  }

  private sleep(ms: number): Promise<void> {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }
}

When NOT to Use This Pattern

This pattern adds complexity. Don't use it when:

- A single application instance talks to one database: a plain transaction with SELECT ... FOR UPDATE is enough
- Contention is rare: optimistic locking alone, with the occasional retry, is simpler
- Eventual consistency is acceptable and strict serialisation isn't required

Alternative approaches:

- PostgreSQL advisory locks or plain row locks when everything lives in one database
- SERIALIZABLE isolation with application-level retries on serialisation failures
- A queue with a single writer per account, which sidesteps locking entirely

Monitoring and Observability

Track these metrics to catch problems early:

// Key metrics to monitor
interface LockingMetrics {
  lockAcquisitionTime: Histogram; // Should be <10ms p99
  lockHoldTime: Histogram; // Should be <100ms p99
  lockContentionRate: Counter; // Failures due to contention
  deadlockCount: Counter; // Should be zero
  optimisticLockFailures: Counter; // Version conflicts
}

// Alert on these conditions
if (lockAcquisitionTime.p99 > 50) {
  alert('High Redis lock contention');
}

if (deadlockCount > 0) {
  alert('Deadlock detected - check lock ordering');
}
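If you don't have a metrics library wired up yet, even a crude percentile tracker (a sketch, not production-grade) is enough to start watching lock acquisition times:

```typescript
// Minimal histogram: records samples and reports percentiles by sorting.
// Fine for a quick dashboard; use a real metrics client in production.
class SimpleHistogram {
  private samples: number[] = [];

  record(valueMs: number): void {
    this.samples.push(valueMs);
  }

  percentile(p: number): number {
    const sorted = [...this.samples].sort((a, b) => a - b);
    const index = Math.min(
      sorted.length - 1,
      Math.ceil((p / 100) * sorted.length) - 1
    );
    return sorted[index];
  }
}
```

Record the elapsed time around each acquireRedisLock call and alert when percentile(99) drifts above your threshold.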

Conclusion

Dual-layer locking isn't just about preventing race conditions—it's about building systems that stay fast under load while maintaining strict consistency guarantees.

This pattern powers financial transactions, inventory management, and any system where correctness and performance both matter. The complexity pays for itself when you're processing millions of pounds and can't afford to be wrong.

distributed systems redis postgresql