This guide provides detailed instructions for installing and managing Metoro in an on-premises environment. It covers system requirements, installation steps, and best practices for maintaining your deployment.

Prerequisites

Before beginning the installation, ensure your environment meets the following requirements:

  • Kubernetes cluster (v1.19 or later)
  • Helm 3.x installed
  • Resource requirements per node for the Metoro Agent:
    • CPU: 0.3 cores
    • Memory: 300MB RAM
  • Total resource requirements for the Metoro Hub:
    • CPU: 4 cores
    • Memory: 8GB RAM
  • Network requirements:
    • Access to quay.io/metoro repositories for pulling images (optional if using your own private registry)
    • Internal network connectivity between cluster nodes
    • Ingress controller for external access (recommended)
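The version requirements above can be checked with a quick pre-flight sketch (assumes `kubectl` and `helm` are on your PATH; the `version_ge` helper and the version variables are illustrative):

```shell
# Illustrative pre-flight check for the minimum versions above.
# version_ge A B succeeds if version A >= version B (uses sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

K8S_VERSION="1.27.3"   # e.g. from: kubectl version -o json
HELM_VERSION="3.14.0"  # e.g. from: helm version --template '{{.Version}}'

version_ge "$K8S_VERSION" "1.19.0" && echo "kubernetes OK"
version_ge "$HELM_VERSION" "3.0.0" && echo "helm OK"
```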

Quick Start

1. Get Access to Required Resources

Contact us to get access to the Helm charts and private image repositories.

You will receive:

  • Helm chart archive (zipped)
  • Image repository pull secret

2. Prepare the Installation

  1. Extract the helm chart:
unzip helm.zip && cd helm
  2. Set your kubectl context:
kubectl config use-context CLUSTER_YOU_WANT_TO_INSTALL_INTO

3. Install Metoro Hub

Install the Metoro hub using Helm:

helm upgrade --install \
  --namespace metoro-hub \
  --create-namespace \
  metoro ./ \
  --set clickhouse.enabled=true \
  --set postgresql.enabled=true \
  --set onPrem.isOnPrem=true \
  --set imagePullSecret.data=<imagePullSecret-from-step-1> \
  --set apiserver.replicas=1 \
  --set ingester.replicas=1 \
  --set temporal.enabled=true \
  --set ingester.autoscaling.horizontalPodAutoscaler.enabled=false \
  --set apiserver.autoscaling.horizontalPodAutoscaler.enabled=false

If the Clickhouse pod remains in a Pending state, the cluster most likely lacks sufficient free resources. You can lower the resource requests and limits in the Clickhouse StatefulSet definition.
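If your chart version exposes the Clickhouse resource settings through values, a fragment along these lines may work (the key names below are an assumption about the bundled chart; verify them with `helm show values ./` before use, or edit the StatefulSet directly):

```yaml
# Assumed values layout for adjusting the bundled Clickhouse footprint.
# Verify the actual keys against the chart's default values first.
clickhouse:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
```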

4. Access the UI

  1. Port forward the API server:
kubectl port-forward -n metoro-hub service/apiserver 8080:80
  2. Create an account:
    • Navigate to http://localhost:8080
    • Create a new account (do not use SSO options for on-prem installations)

5. Install the Metoro Agent

  1. After logging in, select “Existing Cluster” and enter your cluster’s name
  2. Copy the exporter.secret.bearerToken value from the installation screen
  3. Run the installation command:
bash -c "$(curl -fsSL http://localhost:8080/install.sh)" -- \
  TOKEN_HERE \
  http://ingester.metoro-hub.svc.cluster.local/ingest/api/v1/otel \
  http://apiserver.metoro-hub.svc.cluster.local/api/v1/exporter \
  --existing-cluster \
  --on-prem

Advanced Configuration - Production

Minimal Production Configuration

For the metoro-hub values.yaml:

clickhouse:
  enabled: true
  auth:
    password: "CHANGE_ME_CLICKHOUSE_PASSWORD" # Use a random password

postgresql:
  enabled: true
  auth:
    password: "CHANGE_ME_POSTGRES_PASSWORD" # Use a random password

onPrem:
  isOnPrem: true

imagePullSecret:
  data: "IMAGE_PULL_SECRET"

authSecret:
  authMaterial: "CHANGE_ME_AUTH_MATERIAL" # Use a random string

apiserver:
  replicas: 2
  autoscaling:
    horizontalPodAutoscaler:
      enabled: false
  defaultOnPremAdmin:
    email: "YOUR_EMAIL_CHANGE_ME"
    password: "YOUR_PASSWORD_CHANGE_ME"
    name: "YOUR_NAME_CHANGE_ME"
    organization: "YOUR_ORGANIZATION_CHANGE_ME"
    environmentName: "YOUR_ENVIRONMENT_NAME_CHANGE_ME"

temporal:
  enabled: true
  server:
    config:
      persistence:
        default:
          sql:
            password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as the postgres above
        visibility:
          sql:
            password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as the postgres above

ingester:
  replicas: 2
  autoscaling:
    horizontalPodAutoscaler:
      enabled: false

Then install with the following command:

helm upgrade --install --namespace metoro-hub --create-namespace metoro ./ -f values.yaml

For the metoro-exporter values.yaml:

exporter:
  image:
    tag: "0.841.0"
  envVars:
    mandatory:
      otlpUrl: "http://ingester.metoro-hub.svc.cluster.local/ingest/api/v1/otel"
      apiServerUrl: "http://apiserver.metoro-hub.svc.cluster.local/api/v1/exporter"
  secret:
    externalSecret:
      enabled: true
      name: "on-prem-default-exporter-token-secret"
      secretKey: "token"

nodeAgent:
  image:
    tag: "0.65.0"

Then install with the following command:

helm repo add metoro-exporter https://metoro-io.github.io/metoro-helm-charts/
helm repo update metoro-exporter
helm upgrade --install --create-namespace --namespace metoro metoro-exporter metoro-exporter/metoro-exporter -f values.yaml

Securing the Metoro Hub

Before deploying in production, you should change at least the following settings in the Metoro Hub Helm chart:

apiserver:
  defaultOnPremAdmin:
    password: "CHANGE_ME_TO_SECURE_PASSWORD"  # Change this to a secure password, you'll use this to log in to the UI for the first time

postgresql:
  auth:
    password: "CHANGE_ME_POSTGRES_PASSWORD" # Use a random password

clickhouse:
  auth:
    password: "CHANGE_ME_CLICKHOUSE_PASSWORD" # Use a random password

authSecret:
  authMaterial: "CHANGE_ME_AUTH_MATERIAL" # Use a random string

temporal:
  server:
    config:
      persistence:
        default:
          sql:
            password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as above
        visibility:
          sql:
            password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as above

onPrem:
  isOnPrem: true
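One way to generate the random values above is a portable shell one-liner (a sketch using only POSIX tools; any password generator works just as well):

```shell
# Generate hex-encoded random secrets from /dev/urandom (POSIX tools only).
# rand_hex N prints 2*N hex characters.
rand_hex() { head -c "$1" /dev/urandom | od -An -tx1 | tr -d ' \n'; }

echo "clickhouse password: $(rand_hex 16)"   # 32 hex chars
echo "postgres password:   $(rand_hex 16)"
echo "auth material:       $(rand_hex 32)"   # 64 hex chars
```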

Connecting the exporter to the Metoro Hub via helm

The exporter needs to be configured to connect to the Metoro hub. This can either be done through the UI or by setting the following values in the hub helm chart:

apiserver:
  defaultOnPremAdmin:
    email: "YOUR_EMAIL"
    password: "YOUR_PASSWORD"
    name: "YOUR NAME"
    organization: "YOUR ORGANIZATION"
    environmentName: "YOUR ENVIRONMENT NAME"

Then when installing the exporter, you can set the following values:

exporter:
  secret:
    externalSecret:
      enabled: true
      name: "on-prem-default-exporter-token-secret"
      secretKey: "token"

Using a different image registry

If you want to use a different image registry, you can set the imagePullSecret field in the Helm chart values file to a secret containing the pull secret.

imagePullSecret:
  name: "my-registry-credentials"
  data: "dockerconfigjson-encoded-value"
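The `data` value is a base64-encoded `.dockerconfigjson` for your registry. A sketch of producing it from credentials (the registry host, username, and password below are placeholders for your own values):

```shell
# Build a .dockerconfigjson for a private registry and base64-encode it.
# REGISTRY, REG_USER, and REG_PASS are placeholders.
REGISTRY="registry.example.com"
REG_USER="myuser"
REG_PASS="mypassword"

# Docker-style auth entry: base64("user:password")
AUTH=$(printf '%s:%s' "$REG_USER" "$REG_PASS" | base64 | tr -d '\n')

# The encoded config, suitable for imagePullSecret.data
printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH" | base64 | tr -d '\n'
```

Alternatively, `kubectl create secret docker-registry` produces the same secret directly in the cluster.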

High Availability Setup

For production environments requiring high availability, run multiple replicas of the ingester and API server and enable autoscaling. We also recommend using external databases for increased availability and performance; see the external database configuration section for more details. Note that the bundled Postgres chart has limited HA support, while the Clickhouse chart has built-in HA support.

ingester:
  replicas: 2
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 2
      maxReplicas: 4
      targetCPUUtilizationPercentage: 60

apiserver:
  replicas: 2
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 2
      maxReplicas: 4
      targetCPUUtilizationPercentage: 60

clickhouse:
  enabled: true
  persistence:
    size: 100Gi
  replicaCount: 3

postgresql:
  enabled: true
  persistence:
    size: 20Gi
  primary:
    replicaCount: 3

External Database Configuration

To use external databases instead of the built-in ones:

clickhouse:
  enabled: false

clickhouseSecret:
  name: "clickhouse-secret"
  clickhouseUrl: "clickhouse://xxxxxxx.us-east-1.aws.clickhouse.cloud:9440"
  clickhouseUser: "username"
  clickhousePassword: "password"
  clickhouseDatabase: "metoro"

postgresql:
  enabled: false

postgresSecret:
  name: "postgres-secret"
  postgresHost: "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com"
  postgresPort: "5432"
  postgresUser: "postgres"
  postgresPassword: "password"
  postgresDatabase: "metoro"

# This needs to be matched with the postgresSecret values
temporal:
  server:
    config:
      persistence:
        default:
          driver: sql
          sql:
            driver: postgres12
            database: temporal
            user: postgres
            password: password
            host: "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com"
            port: 5432
        visibility:
          driver: sql
          sql:
            driver: postgres12
            database: temporal_visibility
            user: postgres
            password: password # Must match postgresSecret.postgresPassword
            host: "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com"
            port: 5432

Ingress Configuration

Enable ingress for external access:

apiserver:
  # Match this with the scheme (http or https) and hostname of the ingress
  deploymentUrl: https://metoro.yourdomain.com
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: "metoro.yourdomain.com"
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: metoro-tls
        hosts:
          - metoro.yourdomain.com

ingester:
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: "ingest.metoro.yourdomain.com"
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: metoro-ingester-tls
        hosts:
          - ingest.metoro.yourdomain.com

Maintenance

Upgrading Metoro

Minor version upgrades can be installed with a standard helm upgrade command:

helm upgrade --install --namespace metoro-hub metoro ./ -f values.yaml

Major version upgrades will require a more in-depth migration process. Each major release will have a migration guide available on the Metoro website and in the helm chart itself.

Support and Resources

For additional support:

Configuration Reference