Pre-General Availability: 2026-03-13

7 Monitor Besu Metrics with Prometheus

You can use Prometheus and kube-prometheus-stack to retrieve metrics from the Besu nodes running in Kubernetes clusters.

To install monitoring, you use Helm, kube-prometheus-stack, and the predefined values file (monitoring.yml) that is maintained in the Consensys/quorum-kubernetes repository. In this scenario, Prometheus operates inside an Istio service mesh and securely scrapes metrics via mutual TLS. The kube-prometheus-stack package installs the following software.
  • Prometheus
  • Prometheus Operator
  • Grafana
  • Alertmanager
  • Standard Kubernetes exporters

In the kube-prometheus-stack architecture, Prometheus discovers scrape targets by using a Kubernetes custom resource called a ServiceMonitor. This resource defines the services to scrape, the ports and paths that expose metrics, and configuration for TLS and mutual TLS. ServiceMonitor resources for components such as kube-state-metrics and node-exporter are automatically created when you use Helm to install kube-prometheus-stack. However, to scrape Oracle Blockchain Platform Enterprise Edition for Hyperledger Besu metrics, you must create and manage your own ServiceMonitor resources.

Install the following prerequisites.
  • Helm v3.x. You can verify your Helm version by running the following command.
    helm version
  • kubectl
You must also have command-line access to your Kubernetes cluster. For more information, see Connect to Oracle Kubernetes Engine.
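Before you begin, a quick sanity check (a sketch; adapt to your environment) can confirm that both prerequisite CLIs are on your PATH:

```shell
# Sketch: verify that the prerequisite CLIs (helm, kubectl) are installed.
missing=""
for tool in helm kubectl; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing required tools:$missing"
else
  echo "All prerequisite tools found"
fi
```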
  1. Install kube-prometheus-stack by using the monitoring.yml file that is compatible with Besu.
    1. Enter the following command to download the file. The monitoring.yml file is maintained in the Consensys/quorum-kubernetes repository.
      curl -o monitoring.yml \ 
      https://raw.githubusercontent.com/Consensys/quorum-kubernetes/master/helm/values/monitoring.yml 
    2. If needed, update the Grafana administrator password and configure alert receivers (for example, email or Slack) in the monitoring.yml before deployment.
    3. If it is not already present, add the Prometheus community Helm repository.
      helm repo add prometheus-community https://prometheus-community.github.io/helm-charts 
      helm repo update 
    4. Install the monitoring stack in a namespace dedicated to monitoring. In the following command, the namespace is called monitoring.
      helm install monitoring prometheus-community/kube-prometheus-stack \ 
        --version 34.10.0 \ 
        --namespace monitoring \ 
        --create-namespace \ 
        --values monitoring.yml 
    5. Run the following command to check deployment status.
      kubectl get pods -n monitoring -l release=monitoring
    The installation process deploys the monitoring stack into the monitoring namespace, applies overrides from the monitoring.yml file for Besu compatibility, and creates the standard Kubernetes ServiceMonitor resources that the stack requires.
  2. Add Istio annotations to the monitoring.yml file to enable Istio sidecar injection for Prometheus. Prometheus must join the Istio mesh to scrape Besu endpoints via mutual TLS. Open the monitoring.yml file for editing, and find the following section.
    prometheus: 
      prometheusSpec: 
        podMetadata:
    Add the following Istio annotations.
    prometheus: 
      prometheusSpec: 
        podMetadata: 
          annotations: 
            sidecar.istio.io/inject: "true" 
            sidecar.istio.io/userVolumeMount: | 
              [{"name": "istio-certs", "mountPath": "/etc/istio-certs"}] 
            proxy.istio.io/config: | 
              proxyMetadata: 
                OUTPUT_CERTS: /etc/istio-certs 
            traffic.sidecar.istio.io/includeInboundPorts: "" 
    These annotations add Prometheus to the Istio mesh, configure Istio to write workload mTLS certificates to a shared volume, prevent Envoy from intercepting Prometheus inbound traffic, and make the certificates that Istio generates available to the Prometheus container.
  3. Add volume and volume mount information to the monitoring.yml file to mount the Istio certificates into Prometheus.
    Find the volumes section, which is initially empty.
        volumes: []
    Update the volumes section as shown in the following text.
        volumes: 
          - name: istio-certs 
            emptyDir: 
              medium: Memory 
     
    Find the volumeMounts section, which is initially empty.
        volumeMounts: []
    Update the volumeMounts section as shown in the following text.
        volumeMounts: 
          - name: istio-certs 
            mountPath: /etc/prom-certs 
            readOnly: true
    Certificates generated by Istio are now available inside Prometheus at /etc/prom-certs/.
  4. Apply the updated configuration.
    1. Upgrade the existing Helm release with the modified values file.
      helm upgrade monitoring prometheus-community/kube-prometheus-stack \ 
        --namespace monitoring \ 
        --values monitoring.yml 
    2. Verify that the Prometheus pod restarted and is running with an Istio sidecar.
      kubectl get pods -n monitoring | grep prometheus 
  5. Create a ServiceMonitor resource for Besu metrics, using the following example.
    apiVersion: monitoring.coreos.com/v1 
    kind: ServiceMonitor 
    metadata: 
      name: obp-besu-rpc-metrics 
      namespace: <besu-namespace> 
      labels: 
        release: monitoring   # Must match Prometheus serviceMonitorSelector 
    spec: 
      selector: 
        matchLabels: 
          app: besu 
          besu-role: rpc 
      namespaceSelector: 
        matchNames: 
          - <besu-namespace> 
      endpoints: 
        - port: metrics 
          path: /metrics 
          interval: 30s 
          scheme: https 
          tlsConfig: 
            caFile: /etc/prom-certs/root-cert.pem 
            certFile: /etc/prom-certs/cert-chain.pem 
            keyFile: /etc/prom-certs/key.pem 
            insecureSkipVerify: true 
    The certificate paths must match the mounted location (/etc/prom-certs). The release label must match the serviceMonitorSelector value in Prometheus. The previous example scrapes metrics from RPC nodes (besu-role: rpc). To scrape metrics from boot nodes, validator nodes, or archive nodes, create separate ServiceMonitor resources and edit the besu-role label accordingly (bootnode, validator, or archive).
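    As a sketch, a companion ServiceMonitor for validator nodes might look like the following. The resource name is a hypothetical example that follows the naming pattern above; replace <besu-namespace> and verify the label values against your deployment.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: obp-besu-validator-metrics   # hypothetical name following the pattern above
  namespace: <besu-namespace>
  labels:
    release: monitoring   # Must match Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: besu
      besu-role: validator   # changed from rpc to validator
  namespaceSelector:
    matchNames:
      - <besu-namespace>
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s
      scheme: https
      tlsConfig:
        caFile: /etc/prom-certs/root-cert.pem
        certFile: /etc/prom-certs/cert-chain.pem
        keyFile: /etc/prom-certs/key.pem
        insecureSkipVerify: true
```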
  6. Verify that metrics are being collected in Prometheus.
    1. Run the following command to forward the port of the Prometheus service.
      kubectl port-forward -n <namespace> prometheus-monitoring-kube-prometheus-prometheus-0 9090:9090
    2. Open http://localhost:9090, select Status, and then select Targets. Confirm that the Besu targets are in the UP state.
    3. Run a sample query, such as the following example.
      besu_blockchain_chain_head_transaction_count 
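    You can also run queries programmatically against Prometheus's standard HTTP API, for example: curl 'http://localhost:9090/api/v1/query?query=besu_blockchain_chain_head_transaction_count'. The following sketch shows the general shape of the response and how to check its status field; the JSON here is a hand-written illustration, not output from a real cluster.

```shell
# Sketch: the general shape of a Prometheus /api/v1/query response.
# This JSON is a hand-written illustration, not real cluster output.
response='{"status":"success","data":{"resultType":"vector","result":[]}}'

# Extract the status field; python3 is used here only for portable JSON parsing.
status=$(printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])')
echo "$status"
```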
  7. Access Grafana to confirm that metrics are being collected. Run the following command to forward the port of the Grafana service.
    kubectl port-forward svc/monitoring-grafana -n <namespace> 3000:80 
    Log in with the following credentials.
    • URL: http://localhost:3000
    • User name: admin
    • Password: Run the following command to retrieve it.
      kubectl get secret -n <namespace> monitoring-grafana \ 
      -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
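    The command above works because Kubernetes stores Secret values base64-encoded: the jsonpath expression returns the raw encoded string, and base64 --decode converts it back to plain text. A minimal local illustration of the decoding step follows; "ExamplePass123" is a made-up placeholder, not a real Grafana password.

```shell
# Sketch: how the Secret decoding in the command above works.
# "ExamplePass123" is a made-up placeholder value.
encoded=$(printf 'ExamplePass123' | base64)            # how the value is stored in the Secret
decoded=$(printf '%s' "$encoded" | base64 --decode)    # what the command above prints
echo "$decoded"
```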