node-env-resolver Makes Safe, Typed Node Config the Default
23 Apr 2026

Liran Tal is spot on in his *Environment variables and configuration anti-patterns in Node.js applications* post:

> You may inadvertently expose sensitive information like database credentials and API keys as part of error messages, stack traces, and other forms of data returned to consuming clients.
He explains why process.env feels safe right up until it isn't.
You add dotenv.config() on line one, scatter process.env.DB_PASSWORD across twelve files, then someone's error reporter serialises a request object and your Stripe key ends up in a third-party log.
If you've shipped a Node app, you've probably seen some version of this happen.
His anti-pattern example nails it:
```ts
const port = process.env.PORT || 3000;
const dbUsername = process.env.DB_USERNAME;
const dbPassword = process.env.DB_PASSWORD;
const dbHost = process.env.DB_HOST;
const apiBaseUrl = process.env.API_BASE_URL;
const apiToken = process.env.API_TOKEN;
```
Untyped. Unvalidated. Globally readable. One step away from leaking through logs, traces, or error reporters.
But there's still a gap...
Liran's article names the anti-patterns. It stops short of offering a concrete implementation.
node-env-resolver is a strong answer to that gap. It resolves config into a typed object, limits reliance on global process.env, blocks .env files in production by default, and adds runtime redaction safeguards.
The Problem You're Solving
A single Node app reads config from five places:
```ts
// server.ts
const port = process.env.PORT || 3000;

// database.ts
const dbUrl = process.env.DATABASE_URL;

// auth.ts
const jwtSecret = process.env.JWT_SECRET;

// routes.ts
const apiKey = process.env.API_KEY;

// stripe.ts
const stripeKey = process.env.STRIPE_SECRET_KEY;
```
Five files. Five untyped reads. Zero validation. Five modules that can be tested only by mutating process.env in beforeEach.
And every one of those values sits in global process state, where any module can reach it and where logs, traces, serialised errors, or child-process boundaries can expose it.
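To make that exposure concrete, here's a minimal sketch of the leak path. The error reporter is hypothetical, not any real library; it just does what many naive reporters do and serialises whatever context it is handed:

```typescript
// Hypothetical error reporter: serialises whatever "context" it receives.
// Simulate a secret loaded into global env state, e.g. by dotenv.
process.env.STRIPE_SECRET_KEY = 'sk_live_example';

function reportError(err: Error, context: Record<string, unknown>): string {
  // Anti-pattern: the whole context, env included, goes into the payload.
  return JSON.stringify({ message: err.message, context });
}

const payload = reportError(new Error('db timeout'), { env: process.env });
// payload now contains 'sk_live_example' verbatim, ready to land in a log.
```

Nothing in that code looks like it touches secrets, which is exactly the problem: the secret rode along with global state.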
One Schema. Zero process.env Writes
With node-env-resolver, the same app gets one typed config object, assembled at startup, injected into the modules that need it:
```ts
import { resolve } from 'node-env-resolver';
import {
  postgres,
  string,
  number,
  secret,
} from 'node-env-resolver/validators';

export const config = resolve({
  PORT: number({ default: 3000 }),
  DATABASE_URL: postgres(),
  JWT_SECRET: secret(),
  API_KEY: string({ min: 32 }),
});
```
If DATABASE_URL is missing, the app fails at startup with a descriptive error. Not on the first request. Not in production at 3am. At startup.
config.PORT is a number. config.DATABASE_URL is a string, guaranteed to parse as a Postgres URL. process.env.STRIPE_SECRET_KEY stays undefined, because preventProcessEnvWrite is on by default.
One typed config object, validated once at boot.
Which values are enforced?
Every key in your schema. Required validators throw if missing. Bare literals act as typed defaults. Array literals become enums:
```ts
const config = resolve({
  PORT: 3000,                                               // number, default 3000
  HOST: 'localhost',                                        // string, default
  NODE_ENV: ['development', 'production', 'test'] as const, // enum, required
  DATABASE_URL: postgres(),                                 // required, parsed
  API_KEY: string({ min: 32, pattern: '^[a-zA-Z0-9]+$' }),  // required, validated
});
```
Safety rails
In production, node-env-resolver applies stricter policies automatically: .env files are blocked, audit logging is on, process.env writes are prevented, and missing required keys fail fast. The application code stays the same; only the defaults tighten.
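The environment-gating idea is simple enough to sketch in isolation. The option names allowDotenv and auditLog below are made up for illustration and are not the library's real option surface; only preventProcessEnvWrite is named in its docs:

```typescript
// Sketch of environment-gated defaults; allowDotenv and auditLog are
// hypothetical names, not node-env-resolver's actual options.
type Policies = {
  allowDotenv: boolean;
  auditLog: boolean;
  preventProcessEnvWrite: boolean;
};

function policiesFor(nodeEnv: string | undefined): Policies {
  const isProd = nodeEnv === 'production';
  return {
    allowDotenv: !isProd,         // .env files refused in production
    auditLog: isProd,             // provenance recording on in production
    preventProcessEnvWrite: true, // resolved values never written back
  };
}
```

Calling policiesFor('production') flips the file-loading and audit switches with no change to application code, which is the property the library gives you for free.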
Manual process.env vs node-env-resolver
The manual approach reads process.env directly, with a light dusting of defaults:
```ts
// Manual: scattered, untyped, unsafe
import 'dotenv/config';

export function createDatabase() {
  const url = process.env.DATABASE_URL;
  if (!url) throw new Error('DATABASE_URL required');
  return connect(url, {
    poolSize: Number(process.env.DB_POOL_SIZE) || 10,
  });
}

export function createAuth() {
  return jwt.sign({
    secret: process.env.JWT_SECRET,
    expiry: process.env.JWT_EXPIRY || '1h',
  });
}
```
With node-env-resolver, config is a typed value you pass in:
```ts
import { resolve } from 'node-env-resolver';
import {
  postgres,
  secret,
  number,
  duration,
} from 'node-env-resolver/validators';

export const config = resolve({
  DATABASE_URL: postgres(),
  DB_POOL_SIZE: number({ default: 10 }),
  JWT_SECRET: secret(),
  JWT_EXPIRY: duration({ default: '1h' }),
});

export function createDatabase(
  cfg: Pick<typeof config, 'DATABASE_URL' | 'DB_POOL_SIZE'>,
) {
  return connect(cfg.DATABASE_URL, { poolSize: cfg.DB_POOL_SIZE });
}

export function createAuth(
  cfg: Pick<typeof config, 'JWT_SECRET' | 'JWT_EXPIRY'>,
) {
  return jwt.sign({ secret: cfg.JWT_SECRET, expiry: cfg.JWT_EXPIRY });
}
```
No process.env reads inside modules. No coercion. No runtime undefined. Tests pass a literal config object, no vi.mock required.
| Manual dotenv / process.env | node-env-resolver |
|---|---|
| process.env.FOO may be undefined | Inferred types: number, string, enum literals |
| Values written to process.env | preventProcessEnvWrite on by default |
| .env works everywhere, including prod | Blocked in production by default |
| No validation | Required validators throw at startup |
| No provenance | Every value records its source |
| Secrets may leak via logs/errors | protect() patches console and redacts |
| Test by mutating process.env | Test by passing a plain object |
On "dotenv Increases Exposure"
He's right to flag it.
dotenv.config() loads your .env into process.env, and process.env is global state. Any module, any dependency, any serialised error object can read it. That doesn't guarantee a leak, but it widens the surface area considerably, and most leaks I've seen in the wild took this path.
node-env-resolver treats process.env as an input, not an output. Resolved values land in a typed object that only the modules you inject into can see. In production, .env files are refused by default:
```ts
// In production, dotenv is ignored by default.
// Config must come from processEnv, secrets managers, or mounted files.
const config = await resolveAsync({
  resolvers: [
    [processEnv(), { PORT: number({ default: 3000 }) }],
    [awsSecrets({ secretId: 'prod/app' }), {
      DATABASE_URL: postgres(),
      JWT_SECRET: secret(),
      STRIPE_SECRET_KEY: string(),
    }],
  ],
  options: {
    policies: {
      enforceAllowedSources: {
        DATABASE_URL: ['aws-secrets'],
        JWT_SECRET: ['aws-secrets'],
        STRIPE_SECRET_KEY: ['aws-secrets'],
      },
    },
  },
});
```
enforceAllowedSources means a deploy that accidentally drops STRIPE_SECRET_KEY into process.env fails startup instead of silently overriding the value from AWS.
Runtime Redaction Without the Boilerplate
Knowing the secrets never reach process.env is half the battle. The other half is keeping them out of logs and HTTP responses.
```ts
import { resolve } from 'node-env-resolver';
import {
  protect,
  createResponseMiddleware,
} from 'node-env-resolver/runtime';
import { string, secret } from 'node-env-resolver/validators';

const config = resolve({
  API_KEY: string(),
  DB_PASSWORD: secret(),
});

const unprotect = protect(config);

console.log(`Connecting with key: ${config.API_KEY}`);
// Connecting with key: [REDACTED]

app.use(createResponseMiddleware(config));
// Response bodies are scanned for resolved secret values and redacted before send.
```
protect() patches console.log, console.error, and console.warn to match the resolved secret values and replace them with [REDACTED]. The Express and Hono middleware do the same scan on response bodies.
It's a safety net, not a silver bullet. If a secret arrives through a different channel, or gets transformed before logging, the redactor won't catch it. Treat it as defence in depth on top of not leaking in the first place.
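The core of that safety net is small enough to sketch from scratch. This is an illustration of the patch-and-replace mechanism, not node-env-resolver's actual implementation:

```typescript
// Illustration only, not the library's code: replace each known secret value.
function redact(secrets: string[], text: string): string {
  return secrets.reduce((out, s) => out.split(s).join('[REDACTED]'), text);
}

// Patch console.log to redact known secrets; return a restore function.
function protectSketch(secrets: string[]): () => void {
  const original = console.log;
  console.log = (...args: unknown[]) =>
    original(...args.map((a) => redact(secrets, String(a))));
  return () => {
    console.log = original;
  };
}

const restore = protectSketch(['sk_live_example']);
console.log('key is sk_live_example'); // prints: key is [REDACTED]
restore();
```

The sketch also shows why the caveat above holds: redact only matches the exact resolved strings, so a base64-encoded, truncated, or otherwise transformed secret sails straight through.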
Provenance When You Need It
When an incident hits and someone asks "where did this value come from", most apps have no answer. node-env-resolver records the source of every resolved value, auto-enabled in production:
```ts
import { getAuditLog, createDebugView } from 'node-env-resolver';

const events = getAuditLog();
// env_loaded key=DATABASE_URL source=processEnv
// env_loaded key=API_KEY source=aws-secrets resolvedVia=aws-sm://prod/api-key

const debug = createDebugView(config);
// DATABASE_URL: [REDACTED] source: processEnv
// API_KEY: [REDACTED] source: aws-secrets
```
getAuditLog() gives event history, createDebugView(config) gives a redacted current snapshot, and provenance tracks source metadata in both views. Raw values are never included, only names and sources.
Adopting it in an existing app
If you're retrofitting this into a running service, the order matters more than the final shape. I'd do it in three passes:
1. Create a single config.ts and resolve everything there. Don't change any callers yet. Just prove the schema validates at startup in every environment.
2. Move modules off direct process.env reads one at a time, passing Pick<typeof config, ...> into the functions that need them. Tests get easier with each one.
3. Turn on the stricter policies last: enforceAllowedSources, response middleware, audit logging. By this point the schema is the source of truth and you can tighten without hunting.
The mistake is doing all three at once. Each pass is a small, safe PR. Together they replace the ad-hoc config layer without a big-bang rewrite.
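Pass 2 is the mechanical one. As a self-contained sketch (AppConfig here stands in for typeof config), each module swaps its global read for a narrow parameter:

```typescript
// Stand-in for the resolved config type; in a real app this is `typeof config`.
type AppConfig = { PORT: number; DATABASE_URL: string; DB_POOL_SIZE: number };

// After pass 2: the module declares exactly which keys it needs.
function createDatabase(cfg: Pick<AppConfig, 'DATABASE_URL' | 'DB_POOL_SIZE'>) {
  return { url: cfg.DATABASE_URL, poolSize: cfg.DB_POOL_SIZE };
}

// Callers pass the full config object; tests pass a literal, no env mutation.
const db = createDatabase({ DATABASE_URL: 'postgres://localhost/dev', DB_POOL_SIZE: 5 });
```

Because Pick narrows the parameter, the compiler flags any module that quietly starts depending on a key it never declared.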
Try It Yourself
```sh
npm install node-env-resolver

# Optional integrations
npm install node-env-resolver-aws
```
Minimal usage, sync:
```ts
import { resolve } from 'node-env-resolver';
import { postgres, string, number } from 'node-env-resolver/validators';

export const config = resolve({
  PORT: number({ default: 3000 }),
  DATABASE_URL: postgres(),
  API_KEY: string({ min: 32 }),
});

app.listen(config.PORT);
```
With AWS Secrets Manager, async:
```ts
import { resolveAsync } from 'node-env-resolver';
import { processEnv } from 'node-env-resolver/resolvers';
import { awsSecrets } from 'node-env-resolver-aws';
import { postgres, string, number } from 'node-env-resolver/validators';

export const config = await resolveAsync({
  resolvers: [
    [processEnv(), { PORT: number({ default: 3000 }) }],
    [awsSecrets({ secretId: 'prod/app' }), {
      DATABASE_URL: postgres(),
      API_KEY: string(),
    }],
  ],
});
```
Scan a repo for hardcoded secrets before they ship:
```sh
npx node-env-resolver scan src/
npx node-env-resolver scan --staged
```
The Config You Want
One file, every environment:
```ts
// config.ts
import { resolve } from 'node-env-resolver';
import {
  postgres,
  redis,
  string,
  number,
  boolean,
  duration,
  email,
  secret,
} from 'node-env-resolver/validators';

export const config = resolve({
  PORT: number({ default: 3000 }),
  NODE_ENV: ['development', 'production', 'test'] as const,
  DATABASE_URL: postgres(),
  DB_POOL_SIZE: number({ default: 10 }),
  REDIS_URL: redis({ optional: true }),
  CACHE_TTL: duration({ default: '5m' }),
  JWT_SECRET: secret(),
  JWT_EXPIRY: duration({ default: '1h' }),
  ADMIN_EMAIL: email(),
  ENABLE_METRICS: boolean({ default: false }),
  LOG_LEVEL: ['debug', 'info', 'warn', 'error'] as const,
});
```
Modules take what they need:
```ts
// database.ts
import type { config } from './config';

export function createDatabase(
  cfg: Pick<typeof config, 'DATABASE_URL' | 'DB_POOL_SIZE'>,
) {
  return connect(cfg.DATABASE_URL, { poolSize: cfg.DB_POOL_SIZE });
}

// server.ts
import { config } from './config';
import { createDatabase } from './database';

const db = createDatabase(config);
app.listen(config.PORT);
```
Tests stop mocking process.env:
```ts
it('connects with the provided URL', () => {
  const db = createDatabase({
    DATABASE_URL: 'postgres://testuser:testpass@localhost:5432/testdb',
    DB_POOL_SIZE: 2,
  });
  // assertions
});
```
Zod and Valibot schemas are supported via resolveZod and resolveValibot if you're already using them.