This guide provides configuration examples for common Metoro deployment scenarios. Start with the base configuration, then add sections based on your requirements.

Base Configuration

Every Metoro deployment starts with this minimal configuration. This example assumes you’re using Kubernetes secrets for sensitive data (recommended).

1. Create Required Secrets

# Create namespace
kubectl create namespace metoro-hub

# Create image pull secret (provided by Metoro)
kubectl create secret generic dockerhub-credentials \
  --namespace metoro-hub \
  --from-literal=.dockerconfigjson='YOUR_PROVIDED_DOCKERCONFIG_JSON'

# Create auth secret for JWT signing by the API server
kubectl create secret generic auth-secret-external \
  --namespace metoro-hub \
  --from-literal=authsecret="$(openssl rand -base64 32)"
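If you want to sanity-check the signing secret before storing it, the `openssl rand -base64 32` output should decode back to exactly 32 random bytes. A quick local check (not part of the install itself):

```shell
# Generate the JWT signing secret and confirm it decodes to 32 bytes of entropy
AUTH_SECRET=$(openssl rand -base64 32)
DECODED_LEN=$(printf '%s' "$AUTH_SECRET" | base64 -d | wc -c | tr -d ' ')
echo "decoded secret length: $DECODED_LEN bytes"   # expect 32
```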

2. Base values.yaml

# base-values.yaml
onPrem:
  isOnPrem: true

# Image pull secret reference
imagePullSecret:
  external:
    enabled: true
    secretName: "dockerhub-credentials"
    key: ".dockerconfigjson"

# Auth secret for JWT signing
authSecret:
  external:
    enabled: true
    secretName: "auth-secret-external"
    key: "authsecret"

# API server configuration (includes the default admin user)
apiserver:
  deploymentUrl: "https://metoro-onprem.metoro.io" # Set this to your deployment's URL, e.g. the DNS name pointing at your ingress.
  replicas: 3
  image:
    tag: "onprem-10" # Don't pin this to a specific image digest unless you know what you are doing.
    pullPolicy: Always # Leave as Always so that whenever the apiserver restarts it pulls the latest onprem-10 image, including bug fixes and new features.
  defaultOnPremAdmin:
    email: "admin@company.com"
    password: "password_will_be_changed_immediately_after_install"
    name: "Admin User"
    organization: "Your Company"
    environmentName: "Production"
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi
  autoscaling:
    horizontalPodAutoscaler:
      enabled: false

# Basic ingester configuration
ingester:
  image:
    tag: "onprem-10" # Don't pin this to a specific image digest unless you know what you are doing.
    pullPolicy: Always # Leave as Always so that whenever the ingester restarts it pulls the latest onprem-10 image, including bug fixes and new features.
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

# ClickHouse configuration
clickhouseSpecs:
  isOssClickhouse: true
  isSecure: false

clickhouse:
  enabled: true
  useResizableGp3Volumes: true # Assumes AWS EKS; set to false on other providers.
  storage: 500Gi # Should be adjusted based on your data volume
  credentials:
    external:
      enabled: true
      secretName: "clickhouse-external" # Create this secret first; see the kubectl commands in Example 1.
      key: "password"
  keeper:
    replicaCount: 3
    storage: 10Gi
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
  resources:
    requests:
      cpu: 1000m # Should be tuned based on your workload
      memory: 4Gi  # Should be tuned based on your workload
    limits:
      cpu: 3000m # Should be tuned based on your workload
      memory: 12Gi # Should be tuned based on your workload
  replicaCount: 3 # 3 replicas for high availability

clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

# Temporal configuration
temporal:
  enabled: true
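The ingester autoscaler above follows the standard Kubernetes HPA formula: desiredReplicas = ceil(currentReplicas × observedUtilization / targetUtilization). A worked example against the 70% target (the 90% observed utilization is a hypothetical figure):

```shell
# Worked HPA example: 3 ingester replicas observed at 90% average CPU vs. a 70% target.
# Kubernetes computes desired = ceil(current * observed / target).
CURRENT_REPLICAS=3
OBSERVED_UTIL=90   # hypothetical observed average CPU %
TARGET_UTIL=70     # targetCPUUtilizationPercentage from the values above
# Integer ceiling division: (a + b - 1) / b
DESIRED=$(( (CURRENT_REPLICAS * OBSERVED_UTIL + TARGET_UTIL - 1) / TARGET_UTIL ))
echo "desired replicas: $DESIRED"   # ceil(3 * 90 / 70) = 4
```

So sustained load above the target grows the deployment one replica at a time until utilization falls back under 70% or maxReplicas is reached.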

Database Configurations

Choose one of the following database configurations based on your decisions from the pre-installation checklist.

Option A: All In-Cluster Databases

Runs both ClickHouse and PostgreSQL inside the cluster. The base configuration above already enables in-cluster ClickHouse; leave the in-cluster PostgreSQL enabled as well.

Option B: External Databases

Disables the in-cluster databases and points Metoro at managed services. See Example 2 under Complete Production Examples for a full configuration.

High Availability Configuration

For production environments requiring high availability, add these configurations to your base setup.

Component Scaling

# High availability for API Server
apiserver:
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi

# High availability for Ingester
ingester:
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi

# High availability for Temporal
temporal:
  server:
    replicaCount: 3

Database High Availability

# ClickHouse HA configuration
clickhouse:
  enabled: true
  storage: 500Gi
  replicaCount: 3
  keeper:
    replicaCount: 3
    storage: 10Gi
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
  resources:
    requests:
      cpu: 1000m
      memory: 4Gi
    limits:
      cpu: 3000m
      memory: 12Gi

# PostgreSQL limited HA
postgresql:
  primary:
    replicaCount: 3

Ingress Configuration

Configure external access to your Metoro deployment.

Basic Ingress

apiserver:
  deploymentUrl: "https://metoro.yourdomain.com"
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: "metoro.yourdomain.com"
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: metoro-tls
        hosts:
          - metoro.yourdomain.com

ingester:
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: "ingest.metoro.yourdomain.com"
        paths:
          - path: /
            pathType: Prefix

AWS ALB Ingress

For AWS deployments using Application Load Balancer:

awsAlbIngress:
  enabled: true
  certificateArn: "arn:aws:acm:region:account:certificate/id"

aws-load-balancer-controller:
  enabled: true
  clusterName: "your-eks-cluster-name"
  region: "us-east-1"
  vpcId: "vpc-xxxxxxxxx"

Resource Optimization

Small Deployment (< 50K events/min)

apiserver:
  replicas: 2
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 500m
      memory: 500Mi

ingester:
  replicas: 2
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 1
      memory: 1Gi

clickhouse:
  storage: 100Gi
  resources:
    requests:
      cpu: 1
      memory: 4Gi

Medium Deployment (50K-500K events/min)

apiserver:
  replicas: 3
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 2
      memory: 2Gi

ingester:
  replicas: 3
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi

clickhouse:
  storage: 500Gi
  replicaCount: 3
  resources:
    requests:
      cpu: 2
      memory: 8Gi
    limits:
      cpu: 4
      memory: 16Gi

Large Deployment (> 500K events/min)

Contact Metoro support for assistance with large-scale deployments.
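As a rough starting point for the storage values in these profiles, you can estimate ClickHouse volume size from your event rate and retention window. Every constant below is an assumption for illustration; measure your real average event size and compression ratio before committing to a number:

```shell
# Back-of-envelope ClickHouse storage estimate. All constants are assumptions:
# measure your actual event size and compression ratio before sizing volumes.
EVENTS_PER_MIN=50000     # your ingest rate
AVG_EVENT_BYTES=500      # assumed average event size, pre-compression
RETENTION_DAYS=30        # how long telemetry is kept
COMPRESSION_RATIO=10     # assumed ClickHouse on-disk compression factor

RAW_BYTES=$(( EVENTS_PER_MIN * AVG_EVENT_BYTES * 60 * 24 * RETENTION_DAYS ))
# Each replica stores a full copy of the data, so size per-replica volumes to this.
PER_REPLICA_GIB=$(( RAW_BYTES / COMPRESSION_RATIO / 1024 / 1024 / 1024 ))
echo "Estimated storage per ClickHouse replica: $PER_REPLICA_GIB GiB"
```

Leave generous headroom on top of the estimate; ClickHouse needs free disk for merges, and retention spikes are common.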

Complete Production Examples

Example 1: In-Cluster ClickHouse with External PostgreSQL

This configuration is ideal for organizations that want to manage their own ClickHouse for full control over telemetry data while using a managed PostgreSQL service for metadata.

# First, create the required Kubernetes secrets

# Create namespace
kubectl create namespace metoro-hub

# Create image pull secret (provided by Metoro)
kubectl create secret generic dockerhub-credentials \
  --namespace metoro-hub \
  --from-literal=.dockerconfigjson='YOUR_PROVIDED_DOCKERCONFIG_JSON'

# Create auth secret for JWT signing
kubectl create secret generic auth-secret-external \
  --namespace metoro-hub \
  --from-literal=authsecret="$(openssl rand -base64 32)"

# Create PostgreSQL credentials
kubectl create secret generic postgres-external \
  --namespace metoro-hub \
  --from-literal=host="your-postgres.region.rds.amazonaws.com" \
  --from-literal=port="5432" \
  --from-literal=user="postgres" \
  --from-literal=password="YOUR_SECURE_PASSWORD" \
  --from-literal=database="metoro"

# Create ClickHouse credentials for in-cluster instance
kubectl create secret generic clickhouse-external \
  --namespace metoro-hub \
  --from-literal=password="$(openssl rand -base64 16)" \
  --from-literal=user="metoro" \
  --from-literal=url="clickhouse://clickhouse-metoro:9440" \
  --from-literal=database="default"

# production-hybrid-values.yaml

# API Server Configuration
apiserver:
  deploymentUrl: "https://metoro.yourdomain.com" # Change to your actual domain
  replicas: 3
  image:
    tag: "onprem-10"
    pullPolicy: Always
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi
  autoscaling:
    horizontalPodAutoscaler:
      enabled: false
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
  defaultOnPremAdmin:
    email: "admin@company.com"
    password: "CHANGE_ME_IMMEDIATELY"
    name: "Admin User"
    organization: "Your Company"
    environmentName: "Production"

# Ingester Configuration
ingester:
  image:
    tag: "onprem-10"
    pullPolicy: Always
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

# Authentication Configuration
authSecret:
  external:
    enabled: true
    secretName: "auth-secret-external"
    key: "authsecret"

# External PostgreSQL Configuration
postgresSecret:
  external:
    enabled: true
    secretName: "postgres-external"
    keys:
      host: "host"
      port: "port"
      user: "user"
      password: "password"
      database: "database"

postgresql:
  enabled: false

# In-Cluster ClickHouse Configuration
clickhouseSpecs:
  isOssClickhouse: true
  isSecure: false

clickhouse:
  enabled: true
  useResizableGp3Volumes: true  # Assumes AWS EKS; set to false on other providers
  storage: 500Gi
  credentials:
    external:
      enabled: true
      secretName: "clickhouse-external"
      key: "password"
  keeper:
    replicaCount: 3
    storage: 10Gi
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
  resources:
    requests:
      cpu: 1000m
      memory: 4Gi
    limits:
      cpu: 3000m
      memory: 12Gi
  replicaCount: 3

clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

# Temporal Configuration with External PostgreSQL
temporal:
  enabled: true
  server:
    replicaCount: 3
    config:
      persistence:
        default:
          driver: sql
          sql:
            database: temporal
            user: postgres  # Must match PostgreSQL user
            host: your-postgres.region.rds.amazonaws.com  # Must match PostgreSQL host
            port: 5432
            password: ""
            existingSecret: "postgres-external"
        visibility:
          driver: sql
          sql:
            database: temporal_visibility
            user: postgres  # Must match PostgreSQL user
            host: your-postgres.region.rds.amazonaws.com  # Must match PostgreSQL host
            port: 5432
            password: ""
            existingSecret: "postgres-external"

# Image Pull Configuration
imagePullSecret:
  external:
    enabled: true
    secretName: "dockerhub-credentials"
    key: ".dockerconfigjson"

# On-Prem Settings
onPrem:
  isOnPrem: true
  enableJwtGenerator: false

# AWS ALB Ingress (if using AWS)
awsAlbIngress:
  enabled: true
  certificateArn: "arn:aws:acm:region:account:certificate/your-cert-id"

# AWS Load Balancer Controller (if using AWS EKS)
aws-load-balancer-controller:
  enabled: true
  clusterName: "your-eks-cluster-name"
  region: "us-east-1"
  vpcId: "vpc-xxxxxxxxx"
  serviceAccount:
    create: false
    name: aws-load-balancer-controller

Example 2: Fully External Databases

Here’s the complete example for a production deployment where both PostgreSQL and ClickHouse are hosted externally:

# production-values.yaml
onPrem:
  isOnPrem: true

# Image pull secret
imagePullSecret:
  external:
    enabled: true
    secretName: "dockerhub-credentials"
    key: ".dockerconfigjson"

# Auth configuration
authSecret:
  external:
    enabled: true
    secretName: "auth-secret-external"
    key: "authsecret"

# Disable in-cluster databases
postgresql:
  enabled: false

clickhouse:
  enabled: false

# External database secrets
postgresSecret:
  external:
    enabled: true
    secretName: "postgres-external"
    keys:
      host: "host"
      port: "port"
      user: "user"
      password: "password"
      database: "database"

clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

clickhouseSpecs:
  isOssClickhouse: false
  isSecure: true

# API Server HA configuration
apiserver:
  deploymentUrl: "https://metoro.company.com"
  replicas: 3
  image:
    tag: "onprem-10"
    pullPolicy: Always
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 2
      memory: 2Gi
  defaultOnPremAdmin:
    email: "admin@company.com"
    password: "CHANGE_ME_IMMEDIATELY"
    name: "Admin User"
    organization: "Your Company"
    environmentName: "Production"
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: "metoro.company.com"
        paths:
          - path: /
            pathType: Prefix

# Ingester HA configuration
ingester:
  replicas: 3
  image:
    tag: "onprem-10"
    pullPolicy: Always
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi

# Temporal configuration
temporal:
  enabled: true
  server:
    replicaCount: 3
    config:
      persistence:
        default:
          driver: sql
          sql:
            driver: postgres12
            database: temporal
            user: metoro_admin
            host: metoro-prod.region.rds.amazonaws.com
            port: 5432
            password: ""
            existingSecret: "postgres-external"
        visibility:
          driver: sql
          sql:
            driver: postgres12
            database: temporal_visibility
            user: metoro_admin
            host: metoro-prod.region.rds.amazonaws.com
            port: 5432
            password: ""
            existingSecret: "postgres-external"

Choosing Between Examples

Aspect                | Example 1 (Hybrid)                                | Example 2 (Fully External)
----------------------|---------------------------------------------------|---------------------------
Best For              | Organizations wanting control over telemetry data | Fully managed solution
ClickHouse Management | Self-managed in cluster                           | Managed service
PostgreSQL Management | Managed service                                   | Managed service
Operational Overhead  | Medium                                            | Low
Cost                  | Lower (only RDS costs)                            | Higher (both services)
Data Control          | Full control over telemetry                       | Data in cloud providers

Next Steps

  1. Choose the configuration example that best fits your requirements
  2. Create the required Kubernetes secrets
  3. Copy the appropriate example and customize:
    • Replace placeholder values (domains, passwords, etc.)
    • Adjust resource allocations based on your workload
    • Configure ingress based on your infrastructure
  4. Save as values.yaml
  5. Proceed to the installation guide
  6. After installation, refer to the maintenance guide for ongoing operations
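Before installing, a quick scan can catch placeholder values you forgot to replace in step 3. The pattern list below is only an example set drawn from the templates in this guide; extend it to match your own edits (the heredoc stands in for your real values.yaml):

```shell
# Demo: scan a values file for leftover template placeholders.
# Point FILE at your real values.yaml; the heredoc below is sample data.
FILE=demo-values.yaml
cat > "$FILE" <<'EOF'
apiserver:
  deploymentUrl: "https://metoro.yourdomain.com"
  defaultOnPremAdmin:
    password: "CHANGE_ME_IMMEDIATELY"
EOF

# Count lines still containing known placeholder strings from this guide
LEFTOVER=$(grep -cE 'CHANGE_ME|yourdomain|YOUR_|vpc-xxxxxxxxx' "$FILE")
echo "placeholders remaining: $LEFTOVER"
```

A nonzero count means the file still carries template values and is not ready to install.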

For assistance with sizing or configuration questions, contact your Metoro representative or join our Community Slack.