Elysia (Bun)
Implement OpenTelemetry instrumentation for Elysia applications running on
the Bun runtime to enable distributed tracing, custom metrics, and structured
logging. This guide shows you how to use the OpenTelemetry Node.js SDK with
Bun's --preload flag, create manual spans for Elysia route handlers,
auto-instrument PostgreSQL queries through Drizzle ORM, and propagate trace
context across services -- all without relying on getNodeAutoInstrumentations().
Elysia on Bun requires a different instrumentation approach than traditional
Node.js frameworks. Because Bun does not use Node's http module internally,
HTTP auto-instrumentation cannot intercept Elysia requests. Instead, you create
targeted manual spans with a traced() wrapper function and rely on
@opentelemetry/instrumentation-pg for automatic database span generation.
The result is a lean, precise instrumentation setup with full control over span
names, attributes, and context propagation.
Whether you're building with Bun for its startup speed, migrating from Node.js-based frameworks, or evaluating Elysia for a new microservice, this guide provides production-ready configurations for OpenTelemetry on the Bun runtime.
Create a tracing.ts file with NodeSDK + PgInstrumentation +
LoggerProvider, preload it with bun run --preload ./src/tracing.ts, and
wrap route handlers with a traced() helper that calls
tracer.startActiveSpan(). Database spans from Drizzle ORM are captured
automatically through the pg driver instrumentation.
Who This Guide Is For
This documentation is designed for:
- Elysia developers: adding observability to Bun-based APIs for the first time
- Bun adopters: navigating the differences between Bun and Node.js OpenTelemetry support
- DevOps engineers: deploying Elysia/Bun services with production monitoring and container orchestration
- Engineering teams: migrating from DataDog, New Relic, or other commercial APM solutions to open-source observability
- Backend developers: debugging performance issues or tracing requests across multiple Bun-based microservices
Overview
This guide demonstrates how to:
- Set up the OpenTelemetry Node.js SDK on the Bun runtime
- Preload instrumentation with bun run --preload for early initialization
- Create manual spans for Elysia route handlers using a traced() wrapper
- Auto-instrument PostgreSQL queries via @opentelemetry/instrumentation-pg
- Build a custom OTel logger with @opentelemetry/api-logs and a stdout mirror
- Implement custom metrics with an articles.created counter
- Propagate trace context across services with propagation.inject/extract
- Use Elysia's t.Object() validation with type-safe request bodies
- Export traces, metrics, and logs to base14 Scout via OTLP HTTP
- Deploy with Docker using oven/bun:1.3-alpine base images
Prerequisites
Before starting, ensure you have:
- Bun 1.3 or later installed
- Elysia 1.4 or later installed in your project
- Scout Collector configured and accessible from your application
- See Docker Compose Setup for local development
- See Kubernetes Helm Setup for production deployment
- Basic understanding of OpenTelemetry concepts (traces, spans, attributes)
Compatibility Matrix
| Component | Minimum Version | Recommended Version | Notes |
|---|---|---|---|
| Bun | 1.1.0 | 1.3.x | Node.js compat layer required |
| Elysia | 1.0.0 | 1.4.x | Latest v1 with plugin system |
| TypeScript | 5.0.0 | 6.0.x | Bun includes TS transpiler |
| OpenTelemetry SDK | 0.200.0 | 0.214.0+ | Core SDK for traces/metrics |
| @opentelemetry/api-logs | 0.200.0 | 0.214.0+ | LogRecord API for structured logs |
| instrumentation-pg | 0.60.0 | 0.66.0+ | PostgreSQL auto-instrumentation |
| PostgreSQL | 15.0 | 18.x | For database instrumentation |
| Drizzle ORM | 0.40.0 | 0.45.x | Type-safe SQL via node-postgres |
Instrumented Components
| Component | Method | What You Get |
|---|---|---|
| HTTP routes | Manual spans | Route-level traces with status codes |
| PostgreSQL | Auto (pg driver) | Query spans with statement and duration |
| Drizzle ORM | Auto (via pg) | All Drizzle queries appear as database spans |
| Business metrics | Custom counter | articles.created count |
| Logging | OTel LoggerProvider | Structured logs with trace/span correlation |
| Cross-service | Manual propagation | Distributed traces across Bun services |
Example Application
The complete working example is available at elysia-postgres. It includes two Elysia services (app + notify), PostgreSQL with Drizzle ORM, and a Scout Collector configuration.
Installation
Core Packages
Install the required OpenTelemetry and application packages with Bun:
bun add @opentelemetry/api
bun add @opentelemetry/sdk-node
bun add @opentelemetry/sdk-metrics
bun add @opentelemetry/exporter-trace-otlp-http
bun add @opentelemetry/exporter-metrics-otlp-http
bun add @opentelemetry/resources
bun add @opentelemetry/semantic-conventions
bun add @opentelemetry/instrumentation-pg
Logging Packages
bun add @opentelemetry/api-logs
bun add @opentelemetry/sdk-logs
bun add @opentelemetry/exporter-logs-otlp-http
Application Packages
bun add elysia
bun add drizzle-orm pg
bun add -d drizzle-kit @types/pg typescript
Why not getNodeAutoInstrumentations()?
Bun's runtime does not use Node.js's internal http module for its HTTP
server. The @opentelemetry/instrumentation-http package -- which
getNodeAutoInstrumentations() relies on -- monkey-patches Node's http
module and has no effect on Elysia/Bun request handling. Instead, use targeted
instrumentations like @opentelemetry/instrumentation-pg for database spans
and create manual spans for HTTP route handlers.
Tracing Setup
Create a tracing.ts file that initializes the OpenTelemetry SDK, metric
reader, and logger provider. This file runs before your application code
via Bun's --preload flag.
import { logs } from "@opentelemetry/api-logs";
import { OTLPLogExporter } from "@opentelemetry/exporter-logs-otlp-http";
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-http";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { PgInstrumentation } from "@opentelemetry/instrumentation-pg";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { BatchLogRecordProcessor, LoggerProvider } from "@opentelemetry/sdk-logs";
import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics";
import { NodeSDK } from "@opentelemetry/sdk-node";
import {
ATTR_SERVICE_NAME,
ATTR_SERVICE_VERSION,
} from "@opentelemetry/semantic-conventions";
const endpoint =
process.env.OTEL_EXPORTER_OTLP_ENDPOINT ?? "http://localhost:4318";
const resource = resourceFromAttributes({
[ATTR_SERVICE_NAME]: process.env.OTEL_SERVICE_NAME ?? "elysia-articles",
[ATTR_SERVICE_VERSION]: process.env.OTEL_SERVICE_VERSION ?? "1.0.0",
});
const sdk = new NodeSDK({
resource,
traceExporter: new OTLPTraceExporter({ url: `${endpoint}/v1/traces` }),
metricReader: new PeriodicExportingMetricReader({
exporter: new OTLPMetricExporter({ url: `${endpoint}/v1/metrics` }),
exportIntervalMillis: parseInt(
process.env.OTEL_METRIC_EXPORT_INTERVAL || "10000"
),
}),
instrumentations: [new PgInstrumentation({ requireParentSpan: true })],
});
sdk.start();
const loggerProvider = new LoggerProvider({
processors: [
new BatchLogRecordProcessor(
new OTLPLogExporter({ url: `${endpoint}/v1/logs` })
),
],
});
logs.setGlobalLoggerProvider(loggerProvider);
process.on("SIGTERM", async () => {
await loggerProvider.shutdown();
await sdk.shutdown();
process.exit(0);
});
Key details in this setup:
- OTLP HTTP exporters with explicit /v1/traces, /v1/metrics, and /v1/logs paths -- Bun's fetch-based HTTP client works reliably with HTTP exporters (not gRPC)
- PgInstrumentation with requireParentSpan: true so database spans only appear within the context of a request span, not from connection pool health checks
- A separate LoggerProvider, because the NodeSDK logRecordProcessor option may not initialize correctly on all Bun versions -- setting the global logger provider explicitly is more reliable
- A SIGTERM handler for graceful shutdown in containerized deployments
Configuration
Environment Variables
OTEL_SERVICE_NAME=elysia-articles
OTEL_SERVICE_VERSION=1.0.0
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_METRIC_EXPORT_INTERVAL=10000
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/elysia_articles
NOTIFY_URL=http://localhost:8081
PORT=8080
Custom OTel Logger
Instead of using Pino or another logging library, this setup uses the
OpenTelemetry Logs API directly. Every log entry is emitted as an OTel
LogRecord with automatic trace correlation, plus a JSON line to stdout
for local debugging.
import { trace, context as otelContext } from "@opentelemetry/api";
import { logs, SeverityNumber } from "@opentelemetry/api-logs";
const otelLogger = logs.getLogger("elysia-articles");
type LogAttrs = Record<string, string | number | boolean | undefined>;
function emit(
severityNumber: SeverityNumber,
severityText: string,
message: string,
attrs?: LogAttrs
) {
const span = trace.getActiveSpan();
const ctx = span?.spanContext();
otelLogger.emit({
severityNumber,
severityText,
body: message,
context: otelContext.active(),
attributes: {
...attrs,
...(ctx ? { trace_id: ctx.traceId, span_id: ctx.spanId } : {}),
},
});
const record: Record<string, unknown> = {
ts: new Date().toISOString(),
level: severityText,
msg: message,
...attrs,
...(ctx ? { trace_id: ctx.traceId, span_id: ctx.spanId } : {}),
};
const line = JSON.stringify(record);
if (severityNumber >= SeverityNumber.ERROR) {
process.stderr.write(`${line}\n`);
} else {
process.stdout.write(`${line}\n`);
}
}
export const logger = {
info: (msg: string, attrs?: LogAttrs) =>
emit(SeverityNumber.INFO, "INFO", msg, attrs),
warn: (msg: string, attrs?: LogAttrs) =>
emit(SeverityNumber.WARN, "WARN", msg, attrs),
error: (msg: string, attrs?: LogAttrs) =>
emit(SeverityNumber.ERROR, "ERROR", msg, attrs),
};
This approach has two advantages over Pino on Bun: it avoids the
pino-opentelemetry-transport worker thread (which has inconsistent behavior
on Bun), and it emits LogRecord objects with proper context for automatic
trace/span ID correlation in Scout.
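For local debugging, each call also writes one JSON object per line to stdout. The record shape can be sketched without the OpenTelemetry emit call (the trace and span IDs below are illustrative stand-ins for values read from span.spanContext()):

```typescript
type LogAttrs = Record<string, string | number | boolean | undefined>;

// Build the same record shape emit() mirrors to stdout. The trace/span IDs
// passed in are placeholders, not output from a real tracer.
function buildStdoutRecord(
  level: string,
  msg: string,
  attrs?: LogAttrs,
  ctx?: { traceId: string; spanId: string }
): Record<string, unknown> {
  return {
    ts: new Date().toISOString(),
    level,
    msg,
    ...(attrs ?? {}),
    ...(ctx ? { trace_id: ctx.traceId, span_id: ctx.spanId } : {}),
  };
}

const line = JSON.stringify(
  buildStdoutRecord(
    "INFO",
    "Article created",
    { id: 1 },
    { traceId: "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4", spanId: "00f067aa0ba902b7" }
  )
);
console.log(line);
```

Because every line is a single JSON object, the output works with jq during development and with log collectors in production.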
Database with Drizzle ORM
Drizzle ORM uses the node-postgres (pg) adapter, which means every query
flows through a pg.Pool instance. The PgInstrumentation in tracing.ts
hooks into that pool to generate database spans automatically.
import { pgTable, serial, text, timestamp, varchar } from "drizzle-orm/pg-core";
export const articles = pgTable("articles", {
id: serial().primaryKey(),
title: varchar({ length: 255 }).notNull(),
body: text().notNull(),
createdAt: timestamp("created_at", { precision: 3 }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { precision: 3 }).notNull().defaultNow(),
});
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import * as schema from "./schema";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool, { schema });
Because tracing.ts is preloaded before db.ts is imported, the pg module
is already instrumented when the pool is created. Every Drizzle query --
db.select(), db.insert(), db.update(), db.delete() -- produces a
pg.query span with the SQL statement and execution time.
Docker Compose
The full development stack with both Elysia services, PostgreSQL, and the Scout Collector:
services:
app:
build: ./app
ports:
- "8080:8080"
environment:
PORT: "8080"
DATABASE_URL: postgresql://postgres:postgres@db:5432/elysia_articles
NOTIFY_URL: http://notify:8081
OTEL_SERVICE_NAME: elysia-articles
OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4318
OTEL_METRIC_EXPORT_INTERVAL: "10000"
OTEL_RESOURCE_ATTRIBUTES: deployment.environment=${SCOUT_ENVIRONMENT:-development},service.namespace=examples
depends_on:
db:
condition: service_healthy
otel-collector:
condition: service_started
healthcheck:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost:8080/api/health",
]
interval: 10s
timeout: 5s
retries: 10
start_period: 20s
notify:
build: ./notify
ports:
- "8081:8081"
environment:
PORT: "8081"
OTEL_SERVICE_NAME: elysia-notify
OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4318
OTEL_METRIC_EXPORT_INTERVAL: "10000"
OTEL_RESOURCE_ATTRIBUTES: deployment.environment=${SCOUT_ENVIRONMENT:-development},service.namespace=examples
depends_on:
otel-collector:
condition: service_started
healthcheck:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost:8081/api/health",
]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
db:
image: postgres:18-alpine
environment:
POSTGRES_DB: elysia_articles
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 3s
retries: 10
otel-collector:
image: otel/opentelemetry-collector-contrib:0.148.0
command: ["--config=/etc/otel/config.yaml"]
volumes:
- ./config/otel-config.yaml:/etc/otel/config.yaml:ro
ports:
- "4317:4317"
- "4318:4318"
- "13133:13133"
environment:
SCOUT_ENDPOINT: ${SCOUT_ENDPOINT:-http://localhost:4318}
SCOUT_CLIENT_ID: ${SCOUT_CLIENT_ID:-}
SCOUT_CLIENT_SECRET: ${SCOUT_CLIENT_SECRET:-}
SCOUT_TOKEN_URL: ${SCOUT_TOKEN_URL:-http://localhost/token}
SCOUT_ENVIRONMENT: ${SCOUT_ENVIRONMENT:-development}
healthcheck:
test: ["NONE"]
volumes:
pgdata:
Scout Collector Integration
The collector uses OAuth2 authentication to forward telemetry to Scout. Set
these environment variables before running docker compose up:
export SCOUT_ENDPOINT=https://your-scout-endpoint.base14.io
export SCOUT_CLIENT_ID=your-client-id
export SCOUT_CLIENT_SECRET=your-client-secret
export SCOUT_TOKEN_URL=https://auth.base14.io/oauth/token
export SCOUT_ENVIRONMENT=production
The collector configuration uses the oauth2client extension for
authentication, batch processor for efficient export, and memory_limiter
for safety. Health check spans are filtered out via the filter/noisy
processor to reduce noise.
Production Configuration
Production Environment Variables
OTEL_SERVICE_NAME=elysia-articles
OTEL_SERVICE_VERSION=1.2.0
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
OTEL_METRIC_EXPORT_INTERVAL=60000
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production,service.namespace=articles
DATABASE_URL=postgresql://app_user:secure_password@db-primary:5432/articles_prod
NOTIFY_URL=http://notify:8081
PORT=8080
In production, increase OTEL_METRIC_EXPORT_INTERVAL to 60000 (60 seconds)
to reduce metric export frequency and collector load.
Dockerfile
Both services use a multi-stage build with oven/bun:1.3-alpine for minimal
image size. The --preload flag in the CMD ensures tracing initializes
before the application.
FROM oven/bun:1.3-alpine AS deps
WORKDIR /app
COPY package.json bun.lock* ./
RUN bun install --frozen-lockfile
FROM oven/bun:1.3-alpine
RUN addgroup -g 1001 -S appgroup && adduser -S appuser -u 1001 -G appgroup
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY package.json ./
COPY src ./src/
RUN chown -R appuser:appgroup /app
USER appuser
HEALTHCHECK \
CMD wget --no-verbose --tries=1 --spider http://localhost:8080/api/health || exit 1
EXPOSE 8080
CMD ["bun", "run", "--preload", "./src/tracing.ts", "./src/index.ts"]
The oven/bun:1.3-alpine image is roughly 100MB -- significantly smaller
than most Node.js images. Bun's built-in TypeScript transpiler means no
separate build step is needed.
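To keep the Docker build context small, a .dockerignore next to the Dockerfile is a common companion. A suggested baseline (an assumption, not taken from the example repository):

```
node_modules
.env
*.log
```

Excluding node_modules matters most: dependencies are installed inside the deps stage, so shipping the host's copy into the build context only slows the build.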
Multi-Service Tracing
The example application consists of two Bun services that communicate via
HTTP. Trace context flows from the app service to the notify service through
W3C traceparent headers, creating a single distributed trace across both
services.
Outgoing side -- inject trace context into fetch headers:
import { context, propagation } from "@opentelemetry/api";
import { logger } from "./logger";
const notifyUrl = process.env.NOTIFY_URL ?? "http://localhost:8081";
export async function notifyArticleCreated(articleId: number, title: string) {
const headers: Record<string, string> = { "Content-Type": "application/json" };
propagation.inject(context.active(), headers);
try {
const res = await fetch(`${notifyUrl}/notify`, {
method: "POST",
headers,
body: JSON.stringify({ event: "article.created", article_id: articleId, title }),
});
if (!res.ok) {
logger.warn("Notify service returned non-OK", { status: res.status });
}
} catch (err) {
logger.warn("Notify service unreachable", { error: String(err) });
}
}
propagation.inject() writes the traceparent and tracestate headers into
the plain object. Bun's native fetch sends these headers to the downstream
service.
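For reference, the traceparent value that ends up in the carrier follows the W3C Trace Context format: version, trace ID, span ID, and flags, joined by dashes. A self-contained sketch with illustrative hex values (not output from a real tracer):

```typescript
// W3C traceparent: <version>-<trace-id>-<span-id>-<flags>.
// The hex values below are placeholders chosen for illustration.
const traceId = "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4"; // 32 lowercase hex chars
const spanId = "00f067aa0ba902b7"; // 16 lowercase hex chars
const traceparent = `00-${traceId}-${spanId}-01`; // flags 01 = sampled

// A quick shape check a downstream service could run before extracting:
const valid = /^[0-9a-f]{2}-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$/.test(traceparent);
console.log(traceparent, valid);
```

Seeing this header on the wire (for example, by logging incoming headers in the notify service) is the fastest way to confirm that injection is working.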
Incoming side -- extract trace context and create a child span:
import { Elysia, t } from "elysia";
import { trace, SpanKind, context, propagation } from "@opentelemetry/api";
import { logger } from "./logger";
const tracer = trace.getTracer("elysia-notify");
const PORT = parseInt(process.env.PORT || "8081");
const app = new Elysia()
.get("/api/health", () => ({ status: "healthy", service: "elysia-notify" }))
.post("/notify", async ({ body, request }) => {
const carrier: Record<string, string> = {};
request.headers.forEach((value, key) => {
carrier[key] = value;
});
const parentCtx = propagation.extract(context.active(), carrier);
return tracer.startActiveSpan(
"POST /notify",
{ kind: SpanKind.SERVER },
parentCtx,
async (span) => {
logger.info("Notification received", {
event: String(body.event),
article_id: Number(body.article_id),
});
span.setAttribute("notification.event", String(body.event));
span.setAttribute("notification.article_id", Number(body.article_id));
span.end();
return { status: "received" };
}
);
}, {
  body: t.Object({
    event: t.String(),
    article_id: t.Number(),
    title: t.String(),
  }),
})
.listen(PORT);
logger.info("Notify service started", { port: PORT });
The notify service converts request.headers (a Headers object) into a
plain object so propagation.extract() can read the traceparent header.
The extracted context is passed as the third argument to startActiveSpan,
creating a child span that links back to the originating request in the app
service.
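That conversion step can be exercised on its own with the standard Headers class, with no Elysia or OpenTelemetry imports (the traceparent value below is an illustrative placeholder):

```typescript
// Simulate incoming request headers. Headers is a standard global in Bun
// and Node.js 18+, so no framework imports are needed here.
const incoming = new Headers({
  "content-type": "application/json",
  "traceparent": "00-a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4-00f067aa0ba902b7-01",
});

// The W3C propagator reads from a plain key-value object, so copy each
// header entry into a simple carrier (Headers lowercases keys for us).
const carrier: Record<string, string> = {};
incoming.forEach((value, key) => {
  carrier[key] = value;
});

console.log(carrier["traceparent"]);
```

Passing the Headers object directly to propagation.extract() silently yields no parent context, which is why this conversion is easy to miss when traces fail to link.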
Elysia-Specific Features
Plugin System
Elysia uses a plugin-based architecture where route groups are defined as
separate Elysia instances and composed with .use():
import { Elysia } from "elysia";
import { trace, SpanKind, SpanStatusCode } from "@opentelemetry/api";
import { logger } from "./logger";
import { healthRoutes } from "./routes/health";
import { articleRoutes } from "./routes/article";
const tracer = trace.getTracer("elysia-articles");
const PORT = parseInt(process.env.PORT || "8080");
const app = new Elysia()
.onError(({ code, error, set, request }) => {
const url = new URL(request.url);
return tracer.startActiveSpan(
`${request.method} ${url.pathname}`,
{ kind: SpanKind.SERVER },
(span) => {
if (code === "VALIDATION") {
logger.warn("Validation failed", { path: url.pathname });
span.setAttribute("http.response.status_code", 422);
span.setStatus({ code: SpanStatusCode.ERROR, message: "Validation failed" });
span.end();
set.status = 422;
return {
error: "Validation failed",
details: error.message,
meta: { trace_id: span.spanContext().traceId },
};
}
logger.error("Unhandled error", { error: String(error) });
span.setAttribute("http.response.status_code", 500);
span.setStatus({ code: SpanStatusCode.ERROR, message: String(error) });
span.end();
set.status = 500;
return {
error: "Internal server error",
meta: { trace_id: span.spanContext().traceId },
};
}
);
})
.use(healthRoutes)
.use(articleRoutes)
.listen(PORT);
logger.info("Elysia articles server started", { port: PORT });
The onError hook creates a span for every unhandled error, capturing the
HTTP method, path, and status code. Validation errors from Elysia's built-in
t.Object() validators return 422 with a trace ID in the response body.
Type-Safe Routes with Validation
Elysia provides compile-time type inference from runtime validators. When you
define a body schema with t.Object(), both request validation and TypeScript
types are derived from a single source:
.post(
"/",
async ({ body, set }) =>
traced("POST /api/articles", set, async () => {
const [article] = await db
.insert(articles)
.values({ title: body.title, body: body.body })
.returning();
// body.title and body.body are type-checked at compile time
set.status = 201;
return { data: article, meta: { trace_id: getTraceId() } };
}),
{
body: t.Object({
title: t.String({ minLength: 1 }),
body: t.String({ minLength: 1 }),
}),
}
)
If validation fails, Elysia throws a VALIDATION error that the onError
hook captures and wraps in a span (see above).
The traced() Wrapper Pattern
Since Elysia on Bun cannot use HTTP auto-instrumentation, every route handler
is wrapped with a traced() function that manages span lifecycle:
const tracer = trace.getTracer("elysia-articles");
function traced<T>(
name: string,
set: { status?: number | string },
fn: () => Promise<T>
): Promise<T> {
return tracer.startActiveSpan(name, { kind: SpanKind.SERVER }, async (span) => {
try {
const result = await fn();
const status = typeof set.status === "number" ? set.status : 200;
span.setAttribute("http.response.status_code", status);
if (status >= 400) span.setStatus({ code: SpanStatusCode.ERROR });
span.end();
return result;
} catch (err) {
span.setAttribute("http.response.status_code", 500);
span.setStatus({ code: SpanStatusCode.ERROR, message: String(err) });
span.end();
throw err;
}
});
}
This wrapper:
- Creates a SERVER span with the route name (e.g., GET /api/articles)
- Reads the response status code from Elysia's set object after the handler completes
- Marks spans as ERROR for 4xx and 5xx responses
- Ensures span.end() is called in both success and error paths
- Propagates the active context so that database queries within fn() become child spans
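The status logic of this wrapper can be checked in isolation. The sketch below swaps the real tracer for a minimal stub (an assumption made purely so the example runs without the OpenTelemetry SDK); the control flow mirrors the traced() implementation above:

```typescript
// Minimal stand-in for the OTel span, enough to observe the wrapper's effects.
type StubSpan = {
  attributes: Record<string, unknown>;
  errored: boolean;
  ended: boolean;
};

// Stub for tracer.startActiveSpan: create a span and run the callback with it.
async function startStubSpan<T>(
  collected: StubSpan[],
  fn: (span: StubSpan) => Promise<T>
): Promise<T> {
  const span: StubSpan = { attributes: {}, errored: false, ended: false };
  collected.push(span);
  return fn(span);
}

// Same control flow as the traced() wrapper in the guide.
function traced<T>(
  collected: StubSpan[],
  set: { status?: number | string },
  fn: () => Promise<T>
): Promise<T> {
  return startStubSpan(collected, async (span) => {
    try {
      const result = await fn();
      const status = typeof set.status === "number" ? set.status : 200;
      span.attributes["http.response.status_code"] = status;
      if (status >= 400) span.errored = true;
      span.ended = true;
      return result;
    } catch (err) {
      span.attributes["http.response.status_code"] = 500;
      span.errored = true;
      span.ended = true;
      throw err;
    }
  });
}

// Exercise a success path and a client-error path.
const spans: StubSpan[] = [];
await traced(spans, { status: 201 }, async () => "created");
await traced(spans, { status: 404 }, async () => "missing");
console.log(spans.map((s) => [s.attributes["http.response.status_code"], s.errored, s.ended]));
```

The 201 span ends cleanly without an error status, while the 404 span ends with the error flag set, matching the 4xx/5xx rule above.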
Drizzle ORM Auto-Tracing via PgInstrumentation
Because Drizzle ORM uses the pg driver internally, all database operations
are automatically instrumented. A single route handler like this:
const [rows, [{ total }]] = await Promise.all([
db
.select()
.from(articles)
.orderBy(desc(articles.createdAt))
.limit(perPage)
.offset(offset),
db.select({ total: count() }).from(articles),
]);
Generates a span hierarchy like:
GET /api/articles (SERVER)
├── pg.query:SELECT (CLIENT) — article rows
└── pg.query:SELECT (CLIENT) — count query
Each pg.query span includes the SQL statement (with parameter values
obfuscated by default), execution duration, database name, and host.
onError Hook
The global onError hook ensures that even failed requests produce spans
with meaningful error information:
.onError(({ code, error, set, request }) => {
const url = new URL(request.url);
return tracer.startActiveSpan(
`${request.method} ${url.pathname}`,
{ kind: SpanKind.SERVER },
(span) => {
if (code === "VALIDATION") {
span.setAttribute("http.response.status_code", 422);
span.setStatus({ code: SpanStatusCode.ERROR, message: "Validation failed" });
span.end();
set.status = 422;
return { error: "Validation failed", details: error.message };
}
// ... handle other errors
}
);
})
Elysia's error codes (VALIDATION, NOT_FOUND, INTERNAL_SERVER_ERROR,
PARSE) let you distinguish error types in span attributes for targeted
alerting.
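One way to make that distinction concrete is a small lookup from error code to HTTP status. A sketch: the 422 for VALIDATION matches this guide, while the other mappings are reasonable defaults you may want to adjust:

```typescript
// Map Elysia's onError codes to the HTTP status the handler sets and
// records on the span. VALIDATION -> 422 follows this guide; the rest
// are illustrative defaults.
const statusForErrorCode: Record<string, number> = {
  VALIDATION: 422, // t.Object() schema rejected the request body
  PARSE: 400, // request body could not be parsed
  NOT_FOUND: 404, // no route matched the request
  INTERNAL_SERVER_ERROR: 500, // unhandled exception in a handler
};

function httpStatusFor(code: string): number {
  return statusForErrorCode[code] ?? 500; // unknown codes fall back to 500
}

console.log(httpStatusFor("VALIDATION"), httpStatusFor("UNKNOWN"));
```

Recording the code string itself as a span attribute (for example, elysia.error_code) alongside the numeric status makes alert rules simpler to write.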
Custom Instrumentation
Business Metrics with Counters
Track application-level metrics alongside trace data. The articles.created
counter increments every time a new article is successfully inserted:
import { metrics } from "@opentelemetry/api";
const meter = metrics.getMeter("elysia-articles");
const articlesCreated = meter.createCounter("articles.created", {
description: "Number of articles created",
});
// Inside the POST handler:
const [article] = await db
.insert(articles)
.values({ title: body.title, body: body.body })
.returning();
articlesCreated.add(1);
This counter is exported via the PeriodicExportingMetricReader configured
in tracing.ts and appears in Scout as a time-series metric.
Trace ID in Responses
Every API response includes a trace_id field so callers can reference the
exact trace when reporting issues:
function getTraceId(): string {
return trace.getActiveSpan()?.spanContext().traceId ?? "";
}
// Used in responses:
return { data: article, meta: { trace_id: getTraceId() } };
This is especially useful during development and debugging -- the trace ID links directly to the full distributed trace in Scout.
Manual Spans with startActiveSpan
For operations that need additional detail beyond what traced() provides,
create nested spans directly:
import { trace, SpanKind, SpanStatusCode } from "@opentelemetry/api";
const tracer = trace.getTracer("elysia-articles");
async function enrichArticle(articleId: number) {
return tracer.startActiveSpan(
"enrichArticle",
{ kind: SpanKind.INTERNAL },
async (span) => {
span.setAttribute("article.id", articleId);
try {
const result = await performEnrichment(articleId);
span.end();
return result;
} catch (err) {
span.recordException(err as Error);
span.setStatus({
code: SpanStatusCode.ERROR,
message: (err as Error).message,
});
span.end();
throw err;
}
}
);
}
When called inside a traced() handler, this span becomes a child of the
route span, creating a detailed breakdown of the request processing steps.
Complete Route Example
Here is the full article routes file with all instrumentation patterns
combined -- the traced() wrapper, business counter, trace ID in responses,
type-safe validation, and notification with context propagation:
import { Elysia, t } from "elysia";
import { eq, desc, count } from "drizzle-orm";
import { trace, SpanKind, SpanStatusCode } from "@opentelemetry/api";
import { metrics } from "@opentelemetry/api";
import { db } from "../db";
import { articles } from "../schema";
import { logger } from "../logger";
import { notifyArticleCreated } from "../notification";
const tracer = trace.getTracer("elysia-articles");
const meter = metrics.getMeter("elysia-articles");
const articlesCreated = meter.createCounter("articles.created", {
description: "Number of articles created",
});
function getTraceId(): string {
return trace.getActiveSpan()?.spanContext().traceId ?? "";
}
function traced<T>(
name: string,
set: { status?: number | string },
fn: () => Promise<T>
): Promise<T> {
return tracer.startActiveSpan(name, { kind: SpanKind.SERVER }, async (span) => {
try {
const result = await fn();
const status = typeof set.status === "number" ? set.status : 200;
span.setAttribute("http.response.status_code", status);
if (status >= 400) span.setStatus({ code: SpanStatusCode.ERROR });
span.end();
return result;
} catch (err) {
span.setAttribute("http.response.status_code", 500);
span.setStatus({ code: SpanStatusCode.ERROR, message: String(err) });
span.end();
throw err;
}
});
}
export const articleRoutes = new Elysia({ prefix: "/api/articles" })
.get("/", async ({ query, set }) =>
traced("GET /api/articles", set, async () => {
const page = Number(query.page) || 1;
const perPage = Number(query.per_page) || 20;
const offset = (page - 1) * perPage;
const [rows, [{ total }]] = await Promise.all([
db
.select()
.from(articles)
.orderBy(desc(articles.createdAt))
.limit(perPage)
.offset(offset),
db.select({ total: count() }).from(articles),
]);
logger.info("Listed articles", { page, per_page: perPage, total });
return {
data: rows,
meta: {
page,
per_page: perPage,
total,
trace_id: getTraceId(),
},
};
})
)
.post(
"/",
async ({ body, set }) =>
traced("POST /api/articles", set, async () => {
const [article] = await db
.insert(articles)
.values({ title: body.title, body: body.body })
.returning();
articlesCreated.add(1);
logger.info("Article created", { id: article.id, title: article.title });
await notifyArticleCreated(article.id, article.title);
set.status = 201;
return { data: article, meta: { trace_id: getTraceId() } };
}),
{
body: t.Object({
title: t.String({ minLength: 1 }),
body: t.String({ minLength: 1 }),
}),
}
)
.get("/:id", async ({ params, set }) =>
traced("GET /api/articles/:id", set, async () => {
const id = Number(params.id);
if (isNaN(id) || !Number.isInteger(id) || id < 1) {
logger.warn("Invalid article ID format", { raw_id: params.id });
set.status = 400;
return {
error: "Invalid ID format",
details: "ID must be a positive integer",
meta: { trace_id: getTraceId() },
};
}
const [article] = await db
.select()
.from(articles)
.where(eq(articles.id, id));
if (!article) {
logger.warn("Article not found", { id });
set.status = 404;
return { error: "Article not found", meta: { trace_id: getTraceId() } };
}
return { data: article, meta: { trace_id: getTraceId() } };
})
)
.put(
"/:id",
async ({ params, body, set }) =>
traced("PUT /api/articles/:id", set, async () => {
const id = Number(params.id);
if (isNaN(id) || !Number.isInteger(id) || id < 1) {
set.status = 400;
return {
error: "Invalid ID format",
meta: { trace_id: getTraceId() },
};
}
const updates: Record<string, unknown> = {
updatedAt: new Date(),
};
if (body.title) updates.title = body.title;
if (body.body) updates.body = body.body;
const [article] = await db
.update(articles)
.set(updates)
.where(eq(articles.id, id))
.returning();
if (!article) {
logger.warn("Article not found for update", { id });
set.status = 404;
return { error: "Article not found", meta: { trace_id: getTraceId() } };
}
logger.info("Article updated", { id });
return { data: article, meta: { trace_id: getTraceId() } };
}),
{
body: t.Partial(
t.Object({
title: t.String({ minLength: 1 }),
body: t.String({ minLength: 1 }),
})
),
}
)
.delete("/:id", async ({ params, set }) =>
traced("DELETE /api/articles/:id", set, async () => {
const id = Number(params.id);
if (isNaN(id) || !Number.isInteger(id) || id < 1) {
set.status = 400;
return { error: "Invalid ID format", meta: { trace_id: getTraceId() } };
}
const [article] = await db
.delete(articles)
.where(eq(articles.id, id))
.returning();
if (!article) {
logger.warn("Article not found for delete", { id });
set.status = 404;
return { error: "Article not found", meta: { trace_id: getTraceId() } };
}
logger.info("Article deleted", { id });
set.status = 204;
})
);
Running Your Application
Development
Start the application in development mode with watch and preload:
bun run --watch --preload ./src/tracing.ts ./src/index.ts
Or use the package.json scripts:
bun run dev
Docker Compose
Build and run the full stack, follow the service logs, and tear it down:
docker compose up --build
docker compose logs -f app notify
docker compose down
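The bun run dev command above relies on a scripts entry in package.json. A plausible scripts block matching the commands in this guide (the script names are an assumption, not taken from the example repository):

```json
{
  "scripts": {
    "dev": "bun run --watch --preload ./src/tracing.ts ./src/index.ts",
    "start": "bun run --preload ./src/tracing.ts ./src/index.ts"
  }
}
```

Keeping the --preload flag inside the scripts ensures tracing is initialized consistently in every environment, rather than depending on each developer remembering the flag.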
Verification
After starting the application, create an article and verify spans are generated:
curl -X POST http://localhost:8080/api/articles \
-H "Content-Type: application/json" \
-d '{"title": "Hello Elysia", "body": "First post with OpenTelemetry tracing"}'
Expected response:
{
"data": {
"id": 1,
"title": "Hello Elysia",
"body": "First post with OpenTelemetry tracing",
"createdAt": "2026-03-31T10:00:00.000Z",
"updatedAt": "2026-03-31T10:00:00.000Z"
},
"meta": {
"trace_id": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4"
}
}
The trace_id in the response maps to this span hierarchy in Scout:
elysia-articles: POST /api/articles (SERVER, 201)
├── pg.query:INSERT (CLIENT) — insert article
└── elysia-notify: POST /notify (SERVER)
└── [notification processing]
List articles to verify pagination and database spans:
curl "http://localhost:8080/api/articles?page=1&per_page=10"
Check the health endpoint:
curl http://localhost:8080/api/health
Troubleshooting
Common Issues
Issue: Preload not loading tracing.ts
Symptoms: No spans or metrics appear. Application starts normally but the collector receives no data.
Solutions:
- Verify the --preload flag comes before the entry point in the command:
  # Correct
  bun run --preload ./src/tracing.ts ./src/index.ts
  # Wrong -- a preload flag after the entry point is ignored
  bun run ./src/index.ts --preload ./src/tracing.ts
- Check that tracing.ts does not import any application modules (it should only import @opentelemetry/* packages)
- Confirm the file path is relative to the working directory, not the src folder
Issue: PgInstrumentation not capturing database spans
Symptoms: Route spans appear but no pg.query child spans.
Solutions:
- Ensure tracing.ts is preloaded before pg is imported. The instrumentation must patch pg before any Pool or Client is created
- Verify you are using pg (node-postgres), not postgres (postgres.js) -- PgInstrumentation only supports the pg package
- Check that requireParentSpan: true is not filtering out spans -- try setting it to false temporarily to confirm spans appear
Issue: Manual context propagation not linking traces
Symptoms: The app service and notify service produce separate, unlinked traces instead of a single distributed trace.
Solutions:
- Verify propagation.inject() is called within an active span context. If called outside a traced() wrapper, there is no active context to propagate
- On the receiving side, convert request.headers to a plain object before calling propagation.extract() -- the W3C propagator expects a simple key-value carrier, not a Headers instance
- Pass the extracted context as the third argument to startActiveSpan:
const parentCtx = propagation.extract(context.active(), carrier);
tracer.startActiveSpan("span-name", { kind: SpanKind.SERVER }, parentCtx, (span) => {
// ...
});
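The Headers-to-carrier conversion described above can be sketched as a small helper. The name headersToCarrier is hypothetical, not part of any OpenTelemetry package; it only relies on the standard Fetch API Headers class available in Bun and Node 18+:

```typescript
// Hypothetical helper: flatten a Fetch API Headers instance into the
// plain key-value object that the W3C trace context propagator expects.
// Note that Headers normalizes all keys to lowercase.
function headersToCarrier(headers: Headers): Record<string, string> {
  const carrier: Record<string, string> = {};
  headers.forEach((value, key) => {
    carrier[key] = value;
  });
  return carrier;
}

// Usage on the receiving side:
// const parentCtx = propagation.extract(context.active(), headersToCarrier(request.headers));
```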
Issue: OTLP export failures on Bun
Symptoms: Console shows connection errors or timeout warnings from OTLP exporters.
Solutions:
- Use HTTP exporters (port 4318), not gRPC (port 4317). Bun does not support gRPC natively
- Verify the endpoint URL includes the signal-specific path: http://collector:4318/v1/traces (not just http://collector:4318)
- Check that the collector is running and reachable from the Bun process. In Docker Compose, use the service name as the hostname
Issue: LoggerProvider not emitting logs
Symptoms: Trace and metric data appears in Scout but no log records.
Solutions:
- Ensure logs.setGlobalLoggerProvider(loggerProvider) is called in tracing.ts after creating the LoggerProvider
- Verify the logger calls otelLogger.emit() with the context field set to otelContext.active() for trace correlation
- Check that the log exporter URL ends with /v1/logs
Debug Mode
Enable verbose SDK logging to diagnose initialization issues:
OTEL_LOG_LEVEL=debug bun run --preload ./src/tracing.ts ./src/index.ts
Security Considerations
SQL Query Obfuscation
PgInstrumentation obfuscates SQL parameter values by default. Query
statements appear in spans as:
INSERT INTO "articles" ("title", "body") VALUES ($1, $2) RETURNING *
Parameter values ($1, $2) are never captured in span attributes. To preserve
this behavior, avoid setting enhancedDatabaseReporting: true in
production:
// Safe default — parameter values are obfuscated
new PgInstrumentation({ requireParentSpan: true })
// AVOID in production — captures actual parameter values
new PgInstrumentation({ enhancedDatabaseReporting: true })
PII Protection
Prevent sensitive data from leaking into telemetry:
// BAD: Captures user email in span
span.setAttribute("user.email", email);
// GOOD: Only capture non-sensitive identifiers
span.setAttribute("user.id", userId);
span.setAttribute("user.email_domain", email.split("@")[1]);
For the custom OTel logger, be cautious with structured attributes:
// BAD: Logs request body that might contain passwords
logger.info("Request received", { body: JSON.stringify(req.body) });
// GOOD: Log only safe identifiers
logger.info("Article created", { id: article.id, title: article.title });
Compliance Considerations
For GDPR, HIPAA, or PCI-DSS compliance:
- Never log PII in span attributes or log record attributes
- Use pseudonymization for user identifiers when possible
- Configure data retention policies in your observability backend
- Implement attribute filtering at the collector level using the transform processor
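Collector-level attribute filtering can be sketched with the transform processor and an OTTL delete_key statement. This is a minimal config fragment; the attribute name user.email is illustrative:

```
processors:
  transform/redact:
    error_mode: ignore
    trace_statements:
      - context: span
        statements:
          # Drop a sensitive attribute if an instrumentation captured it
          - delete_key(attributes, "user.email")
```

Add the processor to the traces pipeline so redaction happens before export, keeping PII out of the backend regardless of what application code emits.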
Performance Considerations
Bun Runtime Advantages
Bun's startup time is typically 3-5x faster than Node.js, which means the
overhead of preloading tracing.ts is minimal -- usually under 100ms.
The OpenTelemetry SDK initialization adds roughly 50-80ms to cold start,
compared to 150-300ms on Node.js.
Expected Impact
| Metric | Typical Impact | High-Traffic Impact |
|---|---|---|
| Latency | +1-2ms | +2-4ms |
| CPU overhead | 2-4% | 4-8% |
| Memory | +30-60MB | +60-120MB |
Bun's lower baseline memory usage means the absolute overhead of OpenTelemetry is smaller than on Node.js.
Batch Export Tuning
The PeriodicExportingMetricReader and BatchLogRecordProcessor buffer data
before export. Adjust these for your traffic patterns:
// Development: frequent exports for fast feedback
metricReader: new PeriodicExportingMetricReader({
exporter: new OTLPMetricExporter({ url: `${endpoint}/v1/metrics` }),
exportIntervalMillis: 10000, // every 10 seconds
}),
// Production: less frequent exports to reduce overhead
metricReader: new PeriodicExportingMetricReader({
exporter: new OTLPMetricExporter({ url: `${endpoint}/v1/metrics` }),
exportIntervalMillis: 60000, // every 60 seconds
}),
Skip Health Check Spans
Health check endpoints generate high-volume, low-value spans. The collector
configuration filters these out with the filter/noisy processor:
filter/noisy:
error_mode: ignore
traces:
span:
- 'IsMatch(name, ".*health.*")'
This keeps health checks functional for container orchestrators while preventing span noise in Scout.
FAQ
Does OpenTelemetry work with Bun?
Yes. Bun supports the OpenTelemetry Node.js SDK (@opentelemetry/sdk-node)
through its Node.js compatibility layer. The NodeSDK class, OTLP HTTP
exporters, and targeted instrumentations like @opentelemetry/instrumentation-pg
work correctly. However, getNodeAutoInstrumentations() does not fully work
because Bun does not use Node's internal http, net, dns, or fs
modules for its core operations.
Why do I need manual spans instead of auto-instrumentation?
Bun's HTTP server does not go through Node.js's http.createServer(), so
@opentelemetry/instrumentation-http has nothing to patch. The traced()
wrapper pattern gives you explicit control over span names (e.g.,
GET /api/articles/:id instead of generic HTTP GET) and lets you
set response status codes from Elysia's set object.
How does Drizzle ORM get instrumented without explicit setup?
Drizzle ORM with the drizzle-orm/node-postgres adapter delegates all SQL
execution to a pg.Pool instance. The PgInstrumentation patches the pg
module at the driver level, so every query that flows through the pool --
whether from Drizzle's query builder, raw SQL, or transactions -- generates
a pg.query span automatically.
How do I propagate trace context between Bun services?
On the sending side, call propagation.inject(context.active(), headers)
to write traceparent and tracestate headers into a plain object, then
pass that object to fetch. On the receiving side, extract headers into a
plain object and call propagation.extract(context.active(), carrier) to
get the parent context. Pass this context to startActiveSpan as the third
argument.
What is the difference between OTLP HTTP and gRPC on Bun?
Use OTLP HTTP (port 4318). Bun does not have native gRPC support, and the
@grpc/grpc-js package has compatibility issues on Bun. HTTP exporters work
reliably with Bun's native fetch implementation and support HTTP proxies
and load balancers.
How do I use Drizzle ORM vs Prisma with OpenTelemetry on Bun?
Drizzle ORM with node-postgres works well because PgInstrumentation
patches the underlying pg driver. Prisma uses its own query engine binary,
which bypasses the pg driver entirely -- PgInstrumentation cannot capture
Prisma queries. If you use Prisma, you need Prisma's built-in tracing
integration (previewFeatures = ["tracing"]) or its OpenTelemetry extension.
Can I use Pino logging instead of the custom OTel logger?
You can, but the pino-opentelemetry-transport package uses Node.js worker
threads, which have inconsistent behavior on Bun. The custom OTel logger
approach in this guide uses @opentelemetry/api-logs directly, avoiding
worker threads entirely while providing the same trace correlation and
structured log export.
How do I add custom attributes to all spans?
Set OTEL_RESOURCE_ATTRIBUTES as an environment variable:
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production,service.namespace=articles
These attributes are added to the resource and appear on every span, metric, and log record exported by the service.
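The variable's format is a comma-separated list of key=value pairs. A hypothetical parser illustrating how such a string maps to attributes (the SDK performs this parsing itself; this is for understanding the format only):

```typescript
// Parse an OTEL_RESOURCE_ATTRIBUTES-style string ("k1=v1,k2=v2")
// into a key-value map. Illustrative only.
function parseResourceAttributes(raw: string): Record<string, string> {
  const attrs: Record<string, string> = {};
  for (const pair of raw.split(",")) {
    const idx = pair.indexOf("=");
    if (idx > 0) {
      attrs[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
    }
  }
  return attrs;
}
```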
What happens if the collector is unavailable?
The OTLP HTTP exporters fail without affecting your application -- requests continue to be handled normally, with export errors logged to the console. Spans, metrics, and logs are buffered in memory and dropped once the buffer is full. When the collector comes back online, new telemetry data is exported normally. There is no automatic retry of dropped data.
How do I monitor multiple Elysia services in a single trace?
Each service needs its own tracing.ts with a unique OTEL_SERVICE_NAME.
Use propagation.inject() on outgoing requests and propagation.extract()
on incoming requests to link spans across services. The example in this
guide demonstrates this with the elysia-articles and elysia-notify
services.
What's Next
Advanced Topics
- Express.js Instrumentation - Node.js auto-instrumentation patterns for comparison
- Hono Instrumentation - Another lightweight framework with OpenTelemetry
- Fastify Instrumentation - Plugin-based Node.js framework
Scout Platform Features
- Creating Alerts - Set up alerting for Elysia services
- Dashboard Creation - Build custom dashboards for Bun service metrics
Deployment and Operations
- Docker Compose Setup - Local collector configuration
- Kubernetes Helm Setup - Production deployment
Complete Example
Project Structure
elysia-postgres/
├── app/
│ ├── src/
│ │ ├── tracing.ts # OTel SDK initialization (preloaded)
│ │ ├── index.ts # Elysia app entry point
│ │ ├── logger.ts # Custom OTel logger with stdout mirror
│ │ ├── db.ts # Drizzle + pg pool
│ │ ├── schema.ts # Drizzle table schema
│ │ ├── notification.ts # Outgoing fetch with propagation.inject
│ │ └── routes/
│ │ ├── article.ts # Article CRUD with traced() wrapper
│ │ └── health.ts # Health check endpoint
│ ├── Dockerfile # oven/bun:1.3-alpine multi-stage
│ ├── package.json
│ └── tsconfig.json
├── notify/
│ ├── src/
│ │ ├── tracing.ts # OTel SDK for notify service
│ │ ├── index.ts # Notify service with propagation.extract
│ │ └── logger.ts # Shared logger pattern
│ ├── Dockerfile
│ └── package.json
├── config/
│ └── otel-config.yaml # Scout Collector configuration
├── db/
│ └── init.sql # PostgreSQL schema
├── compose.yml # Full development stack
└── README.md
Running the Example
git clone https://github.com/base-14/examples.git
cd examples/bun/elysia-postgres
docker compose up --build
Testing
# Create an article
curl -X POST http://localhost:8080/api/articles \
-H "Content-Type: application/json" \
-d '{"title": "Test Article", "body": "Testing OpenTelemetry with Elysia on Bun"}'
# List articles
curl http://localhost:8080/api/articles
# Get a specific article
curl http://localhost:8080/api/articles/1
# Update an article
curl -X PUT http://localhost:8080/api/articles/1 \
-H "Content-Type: application/json" \
-d '{"title": "Updated Title"}'
# Delete an article
curl -X DELETE http://localhost:8080/api/articles/1
# Health check
curl http://localhost:8080/api/health
Dependencies
{
"name": "elysia-postgres-app",
"version": "1.0.0",
"private": true,
"scripts": {
"start": "bun run --preload ./src/tracing.ts ./src/index.ts",
"dev": "bun run --watch --preload ./src/tracing.ts ./src/index.ts"
},
"dependencies": {
"elysia": "^1.4.28",
"drizzle-orm": "^0.45.2",
"pg": "^8.20.0",
"@opentelemetry/api": "^1.9.1",
"@opentelemetry/api-logs": "^0.214.0",
"@opentelemetry/sdk-node": "^0.214.0",
"@opentelemetry/sdk-metrics": "^2.6.1",
"@opentelemetry/sdk-logs": "^0.214.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.214.0",
"@opentelemetry/exporter-metrics-otlp-http": "^0.214.0",
"@opentelemetry/exporter-logs-otlp-http": "^0.214.0",
"@opentelemetry/instrumentation-pg": "^0.66.0",
"@opentelemetry/resources": "^2.6.1",
"@opentelemetry/semantic-conventions": "^1.40.0"
},
"devDependencies": {
"drizzle-kit": "^0.31.10",
"@types/pg": "^8.20.0",
"typescript": "^6.0.2"
}
}
GitHub Repository
For the complete working example, see the Elysia PostgreSQL Example repository.
References
- Official OpenTelemetry Node.js Documentation
- Elysia Documentation
- Bun Documentation
- Drizzle ORM Documentation
- @opentelemetry/instrumentation-pg
- OpenTelemetry Logs API
Related Guides
- Express.js Instrumentation - Classic Node.js framework
- Hono Instrumentation - Lightweight Node.js/Bun framework
- Node.js Instrumentation - Generic Node.js setup
- Fastify Instrumentation - Plugin-based Node.js framework
- Docker Compose Setup - Local collector configuration