# OpenSearch

OpenSearch exposes Prometheus-format metrics at `/_prometheus/metrics` when the `prometheus-exporter` plugin is installed. The OpenTelemetry Collector scrapes this endpoint using the Prometheus receiver, collecting 230+ metrics across cluster health, index and search performance, JVM runtime, OS resources, and storage I/O. This guide installs the plugin, configures the receiver, and ships metrics to base14 Scout.
## Prerequisites

| Requirement | Minimum | Recommended |
|---|---|---|
| OpenSearch | 2.0 | 3.5+ |
| prometheus-exporter plugin | Matches OpenSearch version | Matches OpenSearch version |
| OTel Collector Contrib | 0.90.0 | Latest |
| base14 Scout | Any | — |
Before starting:
- OpenSearch HTTP port (9200) must be accessible from the host running the Collector
- The prometheus-exporter plugin version must match your OpenSearch version exactly (e.g., 3.5.0 needs plugin 3.5.0.0)
- OTel Collector installed — see Docker Compose Setup
## What You'll Monitor
- Cluster Health: cluster status, node and datanode count, shard distribution, pending tasks, disk watermark thresholds
- Index & Search Performance: indexing rate, search query and fetch latency, merge operations, refresh and flush rates, scroll contexts
- JVM Runtime: heap and non-heap memory, GC collection count and duration, buffer pools, thread counts, class loading
- OS & Process: CPU percent, memory usage, load averages, file descriptors, swap usage
- Storage & I/O: filesystem capacity, read and write bytes, I/O operations, translog size, store size
- Caches: query cache hits and misses, request cache evictions, fielddata memory and evictions
Full metric list: install the plugin and run the following against your OpenSearch instance:

```shell
curl -s http://localhost:9200/_prometheus/metrics | grep "^# TYPE"
```
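To get a quick per-subsystem overview, the same `# TYPE` lines can be grouped by the first two segments of each metric name. A minimal sketch; the sample lines below are illustrative, so pipe the real endpoint output in their place:

```shell
# Group metric families by prefix (e.g. opensearch_jvm, opensearch_cluster).
# Replace the printf sample with: curl -s http://localhost:9200/_prometheus/metrics | grep "^# TYPE"
printf '%s\n' \
  '# TYPE opensearch_cluster_status gauge' \
  '# TYPE opensearch_jvm_mem_heap_used_bytes gauge' \
  '# TYPE opensearch_jvm_threads_number gauge' \
  | awk '{print $3}' | cut -d_ -f1-2 | sort | uniq -c
```

This counts how many metric families each subsystem exposes, which helps decide what to filter later.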
## Access Setup

The prometheus-exporter plugin is not bundled with OpenSearch. Install it on every node in the cluster:

```shell
# Plugin version must match your OpenSearch version exactly
bin/opensearch-plugin install \
  https://github.com/opensearch-project/opensearch-prometheus-exporter/releases/download/3.5.0.0/prometheus-exporter-3.5.0.0.zip
```

Restart the node after installation. Verify the plugin is active:

```shell
curl -s http://localhost:9200/_cat/plugins | grep prometheus
```

For Docker deployments, build a custom image with the plugin pre-installed:

```dockerfile
FROM opensearchproject/opensearch:3.5.0
RUN /usr/share/opensearch/bin/opensearch-plugin install -b \
    https://github.com/opensearch-project/opensearch-prometheus-exporter/releases/download/3.5.0.0/prometheus-exporter-3.5.0.0.zip
```

No authentication is required when the security plugin is disabled. For secured clusters, see Authentication below.
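With Docker Compose, the custom image and the Collector can share one network so the Collector reaches OpenSearch by service name. A minimal local-test sketch; the service names, file paths, and single-node settings are assumptions, not part of this guide's required setup:

```yaml
# docker-compose.yml sketch (assumed names and paths)
services:
  opensearch:
    build: .                           # Dockerfile with the plugin baked in
    environment:
      - discovery.type=single-node
      - plugins.security.disabled=true # Local testing only; keep security on in production
    ports:
      - "9200:9200"
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - ./otel-config.yaml:/etc/otelcol-contrib/config.yaml
    environment:
      - OPENSEARCH_HOST=opensearch     # Compose DNS resolves the service name
```

Because both services sit on the default Compose network, `${env:OPENSEARCH_HOST}:9200` in the Collector config resolves without exposing OpenSearch beyond the host port mapping.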
## Configuration

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: opensearch
          scrape_interval: 30s
          metrics_path: /_prometheus/metrics
          static_configs:
            - targets:
                - ${env:OPENSEARCH_HOST}:9200

processors:
  resource:
    attributes:
      - key: environment
        value: ${env:ENVIRONMENT}
        action: upsert
      - key: service.name
        value: ${env:SERVICE_NAME}
        action: upsert
  batch:
    timeout: 10s
    send_batch_size: 1024

# Export to base14 Scout
exporters:
  otlphttp/b14:
    endpoint: ${env:OTEL_EXPORTER_OTLP_ENDPOINT}
    tls:
      insecure_skip_verify: true

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resource, batch]
      exporters: [otlphttp/b14]
```
### Environment Variables

```shell
OPENSEARCH_HOST=localhost
ENVIRONMENT=your_environment
SERVICE_NAME=your_service_name
OTEL_EXPORTER_OTLP_ENDPOINT=https://<your-tenant>.base14.io
```
## Authentication

For clusters with the security plugin enabled, add basic auth to the scrape config:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: opensearch
          scrape_interval: 30s
          metrics_path: /_prometheus/metrics
          scheme: https
          tls_config:
            insecure_skip_verify: true # Set false in production with valid certs
          basic_auth:
            username: ${env:OPENSEARCH_USER}
            password: ${env:OPENSEARCH_PASSWORD}
          static_configs:
            - targets:
                - ${env:OPENSEARCH_HOST}:9200
```

Create a read-only monitoring role in OpenSearch Dashboards or via the API. The monitoring account only needs `cluster:monitor/*` permissions.
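One way to define such a role is in the security plugin's YAML configuration files, applied with `securityadmin.sh`. A sketch; the role and user names are examples:

```yaml
# roles.yml fragment: a read-only role for the metrics scraper
metrics_reader:
  cluster_permissions:
    - "cluster:monitor/*"
---
# roles_mapping.yml fragment: map the role to the scraping user
metrics_reader:
  users:
    - "metrics"
```

The `metrics` user's credentials are then what you supply via `OPENSEARCH_USER` and `OPENSEARCH_PASSWORD` in the scrape config above.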
## Filtering Metrics

OpenSearch exposes 230+ metrics, including per-index breakdowns. To collect only cluster-level metrics:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: opensearch
          scrape_interval: 30s
          metrics_path: /_prometheus/metrics
          static_configs:
            - targets:
                - ${env:OPENSEARCH_HOST}:9200
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: "opensearch_(cluster|indices|jvm|os|process|transport|http)_.*"
              action: keep
```
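You can sanity-check the keep regex against sample metric names before deploying; an anchored `grep -E` approximates what the relabel rule keeps. A sketch with illustrative names:

```shell
# Names matching the relabel regex survive; per-index names do not.
# "opensearch_index_*" fails the match because the regex requires "indices".
printf '%s\n' \
  opensearch_cluster_status \
  opensearch_jvm_mem_heap_used_bytes \
  opensearch_index_search_query_count \
  | grep -E '^opensearch_(cluster|indices|jvm|os|process|transport|http)_'
```

Only the first two names print, confirming that per-index series are dropped while node-level and cluster-level series pass.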
## Verify the Setup

Start the Collector and check for metrics within 60 seconds:

```shell
# Check Collector logs for successful scrapes
docker logs otel-collector 2>&1 | grep -i "opensearch"

# Verify the metrics endpoint directly
curl -s http://localhost:9200/_prometheus/metrics \
  | grep opensearch_cluster_status

# Check cluster health
curl -s "http://localhost:9200/_cluster/health?pretty"
```
## Troubleshooting

### Metrics endpoint returns 400 or not found

Cause: The prometheus-exporter plugin is not installed or failed to load.

Fix:

- Check installed plugins: `curl -s http://localhost:9200/_cat/plugins`
- Look for `prometheus-exporter` in the output
- If missing, install the plugin and restart the node
- Verify the plugin version matches your OpenSearch version exactly

### Connection refused on port 9200

Cause: The Collector cannot reach OpenSearch at the configured address.

Fix:

- Verify OpenSearch is running: `curl -s http://localhost:9200`
- For Docker: ensure both containers are on the same network
- Check firewall rules if the Collector runs on a separate host

### No metrics appearing in Scout

Cause: Metrics are collected but not exported.

Fix:

- Check Collector logs for export errors: `docker logs otel-collector`
- Verify `OTEL_EXPORTER_OTLP_ENDPOINT` is set correctly
- Confirm the pipeline includes both the receiver and the exporter

### Plugin version mismatch error

Cause: The prometheus-exporter plugin version does not match the OpenSearch version.

Fix:

- Check your OpenSearch version: `curl -s http://localhost:9200 | jq .version.number`
- Download the matching plugin release from GitHub releases
- Remove the old plugin and install the correct version
## FAQ

### Does this work with OpenSearch running in Kubernetes?

Yes. Set targets to the OpenSearch service DNS name (e.g., `opensearch-cluster.opensearch.svc.cluster.local:9200`). The prometheus-exporter plugin must be installed in the container image; use a custom Dockerfile or an init container. The Collector can run as a sidecar or a DaemonSet.
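As a sketch, the scrape target then points at the service DNS name; the namespace (`opensearch`) and service name below are assumptions to adapt to your cluster:

```yaml
scrape_configs:
  - job_name: opensearch
    metrics_path: /_prometheus/metrics
    static_configs:
      - targets:
          - opensearch-cluster.opensearch.svc.cluster.local:9200
```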
### How do I monitor an OpenSearch cluster with multiple nodes?

Add all data and coordinator node endpoints to the scrape config:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: opensearch
          metrics_path: /_prometheus/metrics
          static_configs:
            - targets:
                - opensearch-1:9200
                - opensearch-2:9200
                - opensearch-3:9200
```

Each node exposes its own node-level and index-level metrics. Cluster health metrics are consistent across all nodes.
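The Prometheus receiver attaches an `instance` label per target, which already distinguishes nodes. You can also attach your own static labels per target, for example to record node roles. A sketch; the label name and role values are illustrative:

```yaml
static_configs:
  - targets: ["opensearch-1:9200", "opensearch-2:9200"]
    labels:
      node_role: data
  - targets: ["opensearch-3:9200"]
    labels:
      node_role: coordinator
```

Labels like this make it easy to group or filter node-level metrics by role in Scout dashboards.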
### What is the difference between opensearch_index_* and opensearch_indices_* metrics?

`opensearch_index_*` metrics are per-index breakdowns with an `index` label. `opensearch_indices_*` metrics are node-level aggregates across all indices on that node. For cluster-wide monitoring, the `opensearch_indices_*` metrics are usually sufficient.
### Can I use this instead of the OpenSearch Dashboards monitoring?

Yes. The prometheus-exporter plugin provides the same underlying cluster and node statistics that OpenSearch Dashboards displays. The OTel Collector approach centralizes metrics alongside your other infrastructure telemetry in base14 Scout.
## What's Next?

- Create Dashboards: Explore pre-built dashboards or build your own. See Create Your First Dashboard
- Monitor More Components: Add monitoring for Elasticsearch, Redis, and other components
- Fine-tune Collection: Use `metric_relabel_configs` to focus on cluster health, search latency, and JVM metrics for production alerting
## Related Guides
- OTel Collector Configuration — Advanced collector configuration
- Docker Compose Setup — Run the Collector locally
- Kubernetes Helm Setup — Production deployment
- Elasticsearch Monitoring — Monitor Elasticsearch clusters
- Creating Alerts — Alert on OpenSearch metrics