Configuration
Complete configuration guide for Metoro on-premises deployments
This guide provides comprehensive configuration examples for different Metoro deployment scenarios. Start with the base configuration and add additional sections based on your requirements.
Base Configuration
Every Metoro deployment starts with this minimal configuration. This example assumes you’re using Kubernetes secrets for sensitive data (recommended).
1. Create Required Secrets
```bash
# Create namespace
kubectl create namespace metoro-hub

# Create image pull secret (provided by Metoro)
kubectl create secret generic dockerhub-credentials \
  --namespace metoro-hub \
  --from-literal=.dockerconfigjson='YOUR_PROVIDED_DOCKERCONFIG_JSON'

# Create auth secret used by the API server for JWT signing
kubectl create secret generic auth-secret-external \
  --namespace metoro-hub \
  --from-literal=authsecret="$(openssl rand -base64 32)"
```
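If `openssl` is not available on the machine where you prepare secrets, an equivalent random value can be produced with Python's standard library. This is just a sketch of the same operation as `openssl rand -base64 32`; the function name is ours, not part of any Metoro tooling:

```python
import base64
import secrets

def gen_auth_secret(n_bytes: int = 32) -> str:
    """Equivalent to `openssl rand -base64 32`: n random bytes, base64-encoded."""
    return base64.b64encode(secrets.token_bytes(n_bytes)).decode("ascii")

# Paste the printed value into the --from-literal=authsecret=... flag above
print(gen_auth_secret())
```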
2. Base values.yaml
```yaml
# base-values.yaml
onPrem:
  isOnPrem: true

# Image pull secret reference
imagePullSecret:
  external:
    enabled: true
    secretName: "dockerhub-credentials"
    key: ".dockerconfigjson"

# Auth secret for JWT signing
authSecret:
  external:
    enabled: true
    secretName: "auth-secret-external"
    key: "authsecret"

# Default admin configuration
apiserver:
  deploymentUrl: "https://metoro-onprem.metoro.io" # Set to the URL of your on-prem deployment, e.g. the DNS name pointing at the ingress
  replicas: 3
  image:
    tag: "onprem-10" # Don't pin this to a specific image digest unless you know what you are doing
    pullPolicy: Always # Leave as Always so restarts pull the latest onprem-10 image, including bug fixes and new features
  defaultOnPremAdmin:
    email: "admin@company.com"
    password: "password_will_be_changed_immediately_after_install"
    name: "Admin User"
    organization: "Your Company"
    environmentName: "Production"
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi
  autoscaling:
    horizontalPodAutoscaler:
      enabled: false

# Basic ingester configuration
ingester:
  image:
    tag: "onprem-10" # Don't pin this to a specific image digest unless you know what you are doing
    pullPolicy: Always # Leave as Always so restarts pull the latest onprem-10 image, including bug fixes and new features
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

# ClickHouse configuration
clickhouseSpecs:
  isOssClickhouse: true
  isSecure: false

clickhouse:
  enabled: true
  useResizableGp3Volumes: true # Assumes AWS EKS; set to false otherwise
  storage: 500Gi # Should be adjusted based on your data volume
  credentials:
    external:
      enabled: true
      secretName: "clickhouse-external"
      key: "password"
  keeper:
    replicaCount: 3
    storage: 10Gi
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
  resources:
    requests:
      cpu: 1000m # Should be tuned based on your workload
      memory: 4Gi # Should be tuned based on your workload
    limits:
      cpu: 3000m # Should be tuned based on your workload
      memory: 12Gi # Should be tuned based on your workload
  replicaCount: 3 # 3 replicas for high availability

clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

# Temporal configuration
temporal:
  enabled: true
```
Database Configurations
Choose one of the following database configurations based on your decisions from the pre-installation checklist.
Option A: All In-Cluster Databases
For development or small deployments where you want everything self-contained.
```bash
# Create ClickHouse credentials
kubectl create secret generic clickhouse-credentials \
  --namespace metoro-hub \
  --from-literal=password="$(openssl rand -base64 16)" \
  --from-literal=user="metoro" \
  --from-literal=url="clickhouse://clickhouse-metoro:9440" \
  --from-literal=database="default"
```
Add to your values.yaml:
```yaml
# In-cluster PostgreSQL
postgresql:
  enabled: true
  auth:
    password: "CHANGE_ME_SECURE_POSTGRES_PASSWORD"
  persistence:
    size: 20Gi

# In-cluster ClickHouse
clickhouse:
  enabled: true
  storage: 200Gi # Adjust based on data volume
  replicaCount: 1
  credentials:
    external:
      enabled: true
      secretName: "clickhouse-credentials"
      key: "password"

clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-credentials"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

clickhouseSpecs:
  isOssClickhouse: true
  isSecure: false
```
Option B: External Databases
For production deployments using managed database services.
```bash
# Create PostgreSQL credentials
kubectl create secret generic postgres-external \
  --namespace metoro-hub \
  --from-literal=host="your-postgres.region.rds.amazonaws.com" \
  --from-literal=port="5432" \
  --from-literal=user="metoro_admin" \
  --from-literal=password="YOUR_SECURE_PASSWORD" \
  --from-literal=database="metoro"

# Create ClickHouse credentials
kubectl create secret generic clickhouse-external \
  --namespace metoro-hub \
  --from-literal=url="clickhouse://your-clickhouse.region.clickhouse.cloud:9440" \
  --from-literal=user="default" \
  --from-literal=password="YOUR_SECURE_PASSWORD" \
  --from-literal=database="metoro"
```
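The `url` value is expected to be a `clickhouse://host:port` string. A quick way to sanity-check the value before baking it into a secret is to parse it with Python's standard library; this helper is our own illustration, not part of Metoro:

```python
from urllib.parse import urlparse

def parse_clickhouse_url(url: str) -> tuple[str, int]:
    """Split a clickhouse:// URL into (hostname, port), rejecting anything else."""
    parsed = urlparse(url)
    if parsed.scheme != "clickhouse" or parsed.hostname is None or parsed.port is None:
        raise ValueError(f"expected clickhouse://host:port, got {url!r}")
    return parsed.hostname, parsed.port

host, port = parse_clickhouse_url("clickhouse://your-clickhouse.region.clickhouse.cloud:9440")
```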
Add to your values.yaml:
```yaml
# Disable in-cluster databases
postgresql:
  enabled: false
clickhouse:
  enabled: false

# External PostgreSQL configuration
postgresSecret:
  external:
    enabled: true
    secretName: "postgres-external"
    keys:
      host: "host"
      port: "port"
      user: "user"
      password: "password"
      database: "database"

# External ClickHouse configuration
clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

clickhouseSpecs:
  isOssClickhouse: true # Set to false if using ClickHouse Cloud (SharedMergeTree)
  isSecure: true # Set based on your ClickHouse configuration

# Temporal configuration for external PostgreSQL
temporal:
  enabled: true
  server:
    config:
      persistence:
        default:
          driver: sql
          sql:
            driver: postgres12
            database: temporal
            user: metoro_admin # Must match PostgreSQL user
            host: your-postgres.region.rds.amazonaws.com # Must match PostgreSQL host
            port: 5432
            password: ""
            existingSecret: "postgres-external"
        visibility:
          driver: sql
          sql:
            driver: postgres12
            database: temporal_visibility
            user: metoro_admin # Must match PostgreSQL user
            host: your-postgres.region.rds.amazonaws.com # Must match PostgreSQL host
            port: 5432
            password: ""
            existingSecret: "postgres-external"
```
When using external PostgreSQL, ensure you've created the required databases:
- `metoro` - Main application database
- `temporal` - Temporal workflow database
- `temporal_visibility` - Temporal visibility database
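These can be created ahead of time with standard DDL, assuming your PostgreSQL user has `CREATEDB` privileges:

```sql
-- Run once against the external PostgreSQL instance before installing
CREATE DATABASE metoro;
CREATE DATABASE temporal;
CREATE DATABASE temporal_visibility;
```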
High Availability Configuration
For production environments requiring high availability, add these configurations to your base setup.
Component Scaling
```yaml
# High availability for API Server
apiserver:
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi

# High availability for Ingester
ingester:
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi

# High availability for Temporal
temporal:
  server:
    replicaCount: 3
```
Database High Availability
```yaml
# ClickHouse HA configuration
clickhouse:
  enabled: true
  storage: 500Gi
  replicaCount: 3
  keeper:
    replicaCount: 3
    storage: 10Gi
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
  resources:
    requests:
      cpu: 1000m
      memory: 4Gi
    limits:
      cpu: 3000m
      memory: 12Gi

# PostgreSQL limited HA
postgresql:
  primary:
    replicaCount: 3
```
For external databases, high availability is typically handled by the managed service:
- AWS RDS: Multi-AZ deployments
- ClickHouse Cloud: Built-in replication
- Google Cloud SQL: High availability option
- Azure Database: Zone redundant HA
Ingress Configuration
Configure external access to your Metoro deployment.
Basic Ingress
```yaml
apiserver:
  deploymentUrl: "https://metoro.yourdomain.com"
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: "metoro.yourdomain.com"
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: metoro-tls
        hosts:
          - metoro.yourdomain.com

ingester:
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: "ingest.metoro.yourdomain.com"
        paths:
          - path: /
            pathType: Prefix
```
AWS ALB Ingress
For AWS deployments using Application Load Balancer:
```yaml
awsAlbIngress:
  enabled: true
  certificateArn: "arn:aws:acm:region:account:certificate/id"

aws-load-balancer-controller:
  enabled: true
  clusterName: "your-eks-cluster-name"
  region: "us-east-1"
  vpcId: "vpc-xxxxxxxxx"
```
Resource Optimization
Small Deployment (< 50K events/min)
```yaml
apiserver:
  replicas: 2
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 500m
      memory: 500Mi

ingester:
  replicas: 2
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 1
      memory: 1Gi

clickhouse:
  storage: 100Gi
  resources:
    requests:
      cpu: 1
      memory: 4Gi
```
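When adapting these numbers, it helps to normalize Kubernetes resource quantities so tiers can be compared directly. A small stdlib-only sketch (the function names are ours, not part of any chart or the Kubernetes API):

```python
def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity to cores: '500m' -> 0.5, '2' -> 2.0."""
    if quantity.endswith("m"):
        return float(quantity[:-1]) / 1000.0
    return float(quantity)

def parse_memory_gib(quantity: str) -> float:
    """Convert 'Mi'/'Gi'/'Ti' memory quantities to GiB."""
    units = {"Mi": 1 / 1024, "Gi": 1.0, "Ti": 1024.0}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return float(quantity[: -len(suffix)]) * factor
    raise ValueError(f"unsupported quantity: {quantity!r}")

# Total ingester CPU request for the small tier: 2 replicas x 500m = 1.0 core
total_cpu = 2 * parse_cpu("500m")
```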
Medium Deployment (50K-500K events/min)
```yaml
apiserver:
  replicas: 3
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 2
      memory: 2Gi

ingester:
  replicas: 3
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi

clickhouse:
  storage: 500Gi
  replicaCount: 3
  resources:
    requests:
      cpu: 2
      memory: 8Gi
    limits:
      cpu: 4
      memory: 16Gi
```
Large Deployment (> 500K events/min)
Contact Metoro support for assistance with large-scale deployments.
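The tier thresholds above can be encoded in a small helper for capacity-planning scripts. This is our own illustration of the guide's tiers, not a Metoro API:

```python
def sizing_tier(events_per_min: int) -> str:
    """Map an event rate to the deployment tiers described in this guide."""
    if events_per_min < 50_000:
        return "small"
    if events_per_min <= 500_000:
        return "medium"
    return "large"  # contact Metoro support for large-scale deployments

print(sizing_tier(120_000))  # medium
```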
Complete Production Examples
Example 1: In-Cluster ClickHouse with External PostgreSQL
This configuration is ideal for organizations that want to manage their own ClickHouse for full control over telemetry data while using a managed PostgreSQL service for metadata.
```bash
# First, create the required Kubernetes secrets

# Create namespace
kubectl create namespace metoro-hub

# Create image pull secret (provided by Metoro)
kubectl create secret generic dockerhub-credentials \
  --namespace metoro-hub \
  --from-literal=.dockerconfigjson='YOUR_PROVIDED_DOCKERCONFIG_JSON'

# Create auth secret for JWT signing
kubectl create secret generic auth-secret-external \
  --namespace metoro-hub \
  --from-literal=authsecret="$(openssl rand -base64 32)"

# Create PostgreSQL credentials
kubectl create secret generic postgres-external \
  --namespace metoro-hub \
  --from-literal=host="your-postgres.region.rds.amazonaws.com" \
  --from-literal=port="5432" \
  --from-literal=user="postgres" \
  --from-literal=password="YOUR_SECURE_PASSWORD" \
  --from-literal=database="metoro"

# Create ClickHouse credentials for in-cluster instance
kubectl create secret generic clickhouse-external \
  --namespace metoro-hub \
  --from-literal=password="$(openssl rand -base64 16)" \
  --from-literal=user="metoro" \
  --from-literal=url="clickhouse://clickhouse-metoro:9440" \
  --from-literal=database="default"
```
```yaml
# production-hybrid-values.yaml

# API Server Configuration
apiserver:
  deploymentUrl: "https://metoro.yourdomain.com" # Change to your actual domain
  replicas: 3
  image:
    tag: "onprem-10"
    pullPolicy: Always
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi
  autoscaling:
    horizontalPodAutoscaler:
      enabled: false
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
  defaultOnPremAdmin:
    email: "admin@company.com"
    password: "CHANGE_ME_IMMEDIATELY"
    name: "Admin User"
    organization: "Your Company"
    environmentName: "Production"

# Ingester Configuration
ingester:
  image:
    tag: "onprem-10"
    pullPolicy: Always
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi
  replicas: 3
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

# Authentication Configuration
authSecret:
  external:
    enabled: true
    secretName: "auth-secret-external"
    key: "authsecret"

# External PostgreSQL Configuration
postgresSecret:
  external:
    enabled: true
    secretName: "postgres-external"
    keys:
      host: "host"
      port: "port"
      user: "user"
      password: "password"
      database: "database"
postgresql:
  enabled: false

# In-Cluster ClickHouse Configuration
clickhouseSpecs:
  isOssClickhouse: true
  isSecure: false
clickhouse:
  enabled: true
  useResizableGp3Volumes: true # For AWS EKS; set to false for other providers
  storage: 500Gi
  credentials:
    external:
      enabled: true
      secretName: "clickhouse-external"
      key: "password"
  keeper:
    replicaCount: 3
    storage: 10Gi
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
  resources:
    requests:
      cpu: 1000m
      memory: 4Gi
    limits:
      cpu: 3000m
      memory: 12Gi
  replicaCount: 3
clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"

# Temporal Configuration with External PostgreSQL
temporal:
  enabled: true
  server:
    replicaCount: 3
    config:
      persistence:
        default:
          driver: sql
          sql:
            database: temporal
            user: postgres # Must match PostgreSQL user
            host: your-postgres.region.rds.amazonaws.com # Must match PostgreSQL host
            port: 5432
            password: ""
            existingSecret: "postgres-external"
        visibility:
          driver: sql
          sql:
            database: temporal_visibility
            user: postgres # Must match PostgreSQL user
            host: your-postgres.region.rds.amazonaws.com # Must match PostgreSQL host
            port: 5432
            password: ""
            existingSecret: "postgres-external"

# Image Pull Configuration
imagePullSecret:
  external:
    enabled: true
    secretName: "dockerhub-credentials"
    key: ".dockerconfigjson"

# On-Prem Settings
onPrem:
  isOnPrem: true
  enableJwtGenerator: false

# AWS ALB Ingress (if using AWS)
awsAlbIngress:
  enabled: true
  certificateArn: "arn:aws:acm:region:account:certificate/your-cert-id"

# AWS Load Balancer Controller (if using AWS EKS)
aws-load-balancer-controller:
  enabled: true
  clusterName: "your-eks-cluster-name"
  region: "us-east-1"
  vpcId: "vpc-xxxxxxxxx"
  serviceAccount:
    create: false
    name: aws-load-balancer-controller
```
Example 2: Fully External Databases
Here’s the complete example for a production deployment with both external databases:
```yaml
# production-values.yaml
onPrem:
  isOnPrem: true

# Image pull secret
imagePullSecret:
  external:
    enabled: true
    secretName: "dockerhub-credentials"
    key: ".dockerconfigjson"

# Auth configuration
authSecret:
  external:
    enabled: true
    secretName: "auth-secret-external"
    key: "authsecret"

# Disable in-cluster databases
postgresql:
  enabled: false
clickhouse:
  enabled: false

# External database secrets
postgresSecret:
  external:
    enabled: true
    secretName: "postgres-external"
    keys:
      host: "host"
      port: "port"
      user: "user"
      password: "password"
      database: "database"
clickhouseSecret:
  external:
    enabled: true
    secretName: "clickhouse-external"
    keys:
      user: "user"
      password: "password"
      url: "url"
      database: "database"
clickhouseSpecs:
  isOssClickhouse: false
  isSecure: true

# API Server HA configuration
apiserver:
  deploymentUrl: "https://metoro.company.com"
  replicas: 3
  image:
    tag: "onprem-10"
    pullPolicy: Always
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 2
      memory: 2Gi
  defaultOnPremAdmin:
    email: "admin@company.com"
    password: "CHANGE_ME_IMMEDIATELY"
    name: "Admin User"
    organization: "Your Company"
    environmentName: "Production"
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: "metoro.company.com"
        paths:
          - path: /
            pathType: Prefix

# Ingester HA configuration
ingester:
  replicas: 3
  image:
    tag: "onprem-10"
    pullPolicy: Always
  autoscaling:
    horizontalPodAutoscaler:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi

# Temporal configuration
temporal:
  enabled: true
  server:
    replicaCount: 3
    config:
      persistence:
        default:
          driver: sql
          sql:
            driver: postgres12
            database: temporal
            user: metoro_admin
            host: metoro-prod.region.rds.amazonaws.com
            port: 5432
            password: ""
            existingSecret: "postgres-external"
        visibility:
          driver: sql
          sql:
            driver: postgres12
            database: temporal_visibility
            user: metoro_admin
            host: metoro-prod.region.rds.amazonaws.com
            port: 5432
            password: ""
            existingSecret: "postgres-external"
```
Choosing Between Examples
| Aspect | Example 1 (Hybrid) | Example 2 (Fully External) |
|---|---|---|
| Best For | Organizations wanting control over telemetry data | Fully managed solution |
| ClickHouse Management | Self-managed in cluster | Managed service |
| PostgreSQL Management | Managed service | Managed service |
| Operational Overhead | Medium | Low |
| Cost | Lower (only RDS costs) | Higher (both services) |
| Data Control | Full control over telemetry | Data held by cloud providers |
Next Steps
- Choose the configuration example that best fits your requirements
- Create the required Kubernetes secrets
- Copy the appropriate example and customize it:
  - Replace placeholder values (domains, passwords, etc.)
  - Adjust resource allocations based on your workload
  - Configure ingress based on your infrastructure
- Save as `values.yaml`
- Proceed to the installation guide
- After installation, refer to the maintenance guide for ongoing operations
For assistance with sizing or configuration questions, contact your Metoro representative or join our Community Slack.