On-Premises Installation & Management
This guide provides detailed instructions for installing and managing Metoro in an on-premises environment. It covers system requirements, installation steps, and best practices for maintaining your deployment.
Prerequisites
Before beginning the installation, ensure your environment meets the following requirements:
- Kubernetes cluster (v1.19 or later)
- Helm 3.x installed
- Resource requirements per node for the Metoro Agent:
  - CPU: 0.3 cores
  - Memory: 300MB RAM
- Total resource requirements for the Metoro Hub:
  - CPU: 4 cores
  - Memory: 8GB RAM
- Network requirements:
  - Access to quay.io/metoro repositories for pulling images (optional if using your own private registry)
  - Internal network connectivity between cluster nodes
  - Ingress controller for external access (recommended)
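The per-node agent figures above let you estimate the total agent footprint for your cluster. A quick sketch for a hypothetical 50-node cluster:

```shell
# Estimate total Metoro Agent overhead across a hypothetical 50-node cluster,
# using the per-node figures above (0.3 cores, 300MB per node).
NODES=50
awk -v n="$NODES" 'BEGIN { printf "CPU: %.1f cores, Memory: %d MB\n", n*0.3, n*300 }'
# → CPU: 15.0 cores, Memory: 15000 MB
```

Add the fixed Metoro Hub requirements (4 cores, 8GB) on top of this when sizing the cluster.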
Quick Start
1. Get Access to Required Resources
Contact us to get access to the Helm charts and private image repositories:
- Join our Community Slack Channel
- Email us at support@metoro.io
You will receive:
- Helm repository (zipped)
- Image repository pull secret
2. Prepare the Installation
- Extract the helm chart:
unzip helm.zip && cd helm
- Set your kubectl context:
kubectl config use-context CLUSTER_YOU_WANT_TO_INSTALL_INTO
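Before installing, a quick preflight can confirm the tooling is in place. This is a minimal sketch that only checks the binaries exist on PATH; it does not verify cluster connectivity or permissions:

```shell
# Check that the required CLI tools are available before proceeding.
for bin in kubectl helm; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "ok: $bin"
  else
    echo "missing: $bin"
  fi
done
```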
3. Install Metoro Hub
Install the Metoro hub using Helm:
helm upgrade --install \
--namespace metoro-hub \
--create-namespace \
metoro ./ \
--set clickhouse.enabled=true \
--set postgresql.enabled=true \
--set onPrem.isOnPrem=true \
--set imagePullSecret.data=<imagePullSecret-from-step-1> \
--set apiserver.replicas=1 \
--set ingester.replicas=1 \
--set temporal.enabled=true \
--set ingester.autoscaling.horizontalPodAutoscaler.enabled=false \
--set apiserver.autoscaling.horizontalPodAutoscaler.enabled=false
Note
If the ClickHouse pod remains in a Pending state, it is most likely due to insufficient cluster resources. You can adjust the resource limits in the ClickHouse StatefulSet definition.
4. Access the UI
- Port forward the API server:
kubectl port-forward -n metoro-hub service/apiserver 8080:80
- Create an account:
- Navigate to http://localhost:8080
- Create a new account (do not use SSO options for on-prem installations)
5. Install the Metoro Agent
- After logging in, select "Existing Cluster" and enter your cluster's name
- Copy the exporter.secret.bearerToken value from the installation screen
- Run the installation command:
bash -c "$(curl -fsSL http://localhost:8080/install.sh)" -- \
TOKEN_HERE \
http://ingester.metoro-hub.svc.cluster.local/ingest/api/v1/otel \
http://apiserver.metoro-hub.svc.cluster.local/api/v1/exporter \
--existing-cluster \
--on-prem
Advanced Configuration - Production
Minimal Production Configuration
For the metoro-hub values.yaml:
clickhouse:
enabled: true
auth:
password: "CHANGE_ME_CLICKHOUSE_PASSWORD" # Use a random password
postgresql:
enabled: true
auth:
password: "CHANGE_ME_POSTGRES_PASSWORD" # Use a random password
onPrem:
isOnPrem: true
imagePullSecret:
data: "IMAGE_PULL_SECRET"
authSecret:
authMaterial: "CHANGE_ME_AUTH_MATERIAL" # Use a random string
apiserver:
replicas: 2
autoscaling:
horizontalPodAutoscaler:
enabled: false
defaultOnPremAdmin:
email: "YOUR_EMAIL_CHANGE_ME"
password: "YOUR_PASSWORD_CHANGE_ME"
name: "YOUR NAME_CHANGE_ME"
organization: "YOUR_ORGANIZATION_CHANGE_ME"
environmentName: "YOUR_ENVIRONMENT_NAME_CHANGE_ME"
temporal:
enabled: true
server:
config:
persistence:
default:
sql:
password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as the postgres above
visibility:
sql:
password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as the postgres above
ingester:
replicas: 2
autoscaling:
horizontalPodAutoscaler:
enabled: false
Then install with the following command:
helm upgrade --install --namespace metoro-hub --create-namespace metoro ./ -f values.yaml
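To fill in the CHANGE_ME placeholders above, one option is to generate random values with openssl (any secure random generator works equally well):

```shell
# Generate random values for the CHANGE_ME placeholders in values.yaml.
CLICKHOUSE_PASSWORD=$(openssl rand -hex 16)
POSTGRES_PASSWORD=$(openssl rand -hex 16)
AUTH_MATERIAL=$(openssl rand -hex 32)
echo "clickhouse: $CLICKHOUSE_PASSWORD"
echo "postgres:   $POSTGRES_PASSWORD"
echo "auth:       $AUTH_MATERIAL"
```

Remember that the temporal persistence passwords must be set to the same value as the PostgreSQL password.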
For the metoro-exporter values.yaml:
exporter:
image:
tag: "0.841.0"
envVars:
mandatory:
otlpUrl: "http://ingester.metoro-hub.svc.cluster.local/ingest/api/v1/otel"
apiServerUrl: "http://apiserver.metoro-hub.svc.cluster.local/api/v1/exporter"
secret:
externalSecret:
enabled: true
name: "on-prem-default-exporter-token-secret"
secretKey: "token"
nodeAgent:
image:
tag: "0.65.0"
Then install with the following command:
helm repo add metoro-exporter https://metoro-io.github.io/metoro-helm-charts/ ;
helm repo update metoro-exporter;
helm upgrade --install --create-namespace --namespace metoro metoro-exporter metoro-exporter/metoro-exporter -f values.yaml
Securing the Metoro Hub
Before deploying in production, you should change at least the following settings in the Metoro Hub Helm chart:
apiserver:
defaultOnPremAdmin:
password: "CHANGE_ME_TO_SECURE_PASSWORD" # Change this to a secure password, you'll use this to log in to the UI for the first time
postgresql:
auth:
password: "CHANGE_ME_POSTGRES_PASSWORD" # Use a random password
clickhouse:
auth:
password: "CHANGE_ME_CLICKHOUSE_PASSWORD" # Use a random password
authSecret:
authMaterial: "CHANGE_ME_AUTH_MATERIAL" # Use a random string
temporal:
server:
config:
persistence:
default:
sql:
password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as above
visibility:
sql:
password: "CHANGE_ME_POSTGRES_PASSWORD" # Use the same password as above
onPrem:
isOnPrem: true
Connecting the exporter to the Metoro Hub via helm
The exporter needs to be configured to connect to the Metoro hub. This can either be done through the UI or by setting the following values in the hub helm chart:
apiserver:
defaultOnPremAdmin:
email: "YOUR_EMAIL"
password: "YOUR_PASSWORD"
name: "YOUR NAME"
organization: "YOUR ORGANIZATION"
environmentName: "YOUR ENVIRONMENT NAME"
Then when installing the exporter, you can set the following values:
exporter:
secret:
externalSecret:
enabled: true
name: "on-prem-default-exporter-token-secret"
secretKey: "token"
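If you need to create the token secret yourself rather than letting the hub chart manage it, a minimal Secret manifest would look like the following sketch. The token value is the exporter.secret.bearerToken shown in the UI, and the namespace is assumed to be the one the exporter is installed into:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: on-prem-default-exporter-token-secret
  namespace: metoro   # assumed: the namespace the exporter is installed into
type: Opaque
stringData:
  token: "PASTE_BEARER_TOKEN_HERE"
```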
Using a different image registry
If you want to use a different image registry, set the imagePullSecret values in the Helm chart values file to the name and contents of a secret holding your registry pull credentials.
imagePullSecret:
name: "my-registry-credentials"
data: "dockerconfigjson-encoded-value"
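The data value is a base64-encoded .dockerconfigjson document. Here is a sketch of building one by hand for a hypothetical registry and credentials (kubectl create secret docker-registry can produce the same payload; check it against the pull secret you were given):

```shell
# Build a base64-encoded .dockerconfigjson for a hypothetical private registry.
REGISTRY="registry.example.com"
USERNAME="myuser"
PASSWORD="mypassword"
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64 | tr -d '\n')
printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH" | base64 | tr -d '\n'
```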
High Availability Setup
For production environments requiring high availability, run at least two replicas of the API server and ingester. We also recommend using external databases for increased availability and performance; see the external database configuration section for more details. Note that the bundled PostgreSQL chart has limited HA support, while the ClickHouse chart has built-in HA support.
ingester:
replicas: 2
autoscaling:
horizontalPodAutoscaler:
enabled: true
minReplicas: 2
maxReplicas: 4
targetCPUUtilizationPercentage: 60
apiserver:
replicas: 2
autoscaling:
horizontalPodAutoscaler:
enabled: true
minReplicas: 2
maxReplicas: 4
targetCPUUtilizationPercentage: 60
clickhouse:
enabled: true
persistence:
size: 100Gi
replicaCount: 3
postgresql:
enabled: true
persistence:
size: 20Gi
primary:
replicaCount: 3
External Database Configuration
To use external databases instead of the built-in ones:
clickhouse:
enabled: false
clickhouseSecret:
name: "clickhouse-secret"
clickhouseUrl: "clickhouse://xxxxxxx.us-east-1.aws.clickhouse.cloud:9440"
clickhouseUser: "username"
clickhousePassword: "password"
clickhouseDatabase: "metoro"
postgresql:
enabled: false
postgresSecret:
name: "postgres-secret"
postgresHost: "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com"
postgresPort: "5432"
postgresUser: "postgres"
postgresPassword: "password"
postgresDatabase: "metoro"
# This needs to be matched with the postgresSecret values
temporal:
server:
config:
persistence:
default:
driver: sql
sql:
driver: postgres12
database: temporal
user: postgres
password: password
host: "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com"
port: 5432
visibility:
driver: sql
sql:
driver: postgres12
database: temporal_visibility
user: postgres
password: CHANGE_ME
host: "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com"
port: 5432
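If pods cannot reach the external databases, a basic TCP reachability check can rule out DNS and firewall issues before debugging credentials. This is a minimal sketch using bash's /dev/tcp; it checks connectivity only, not TLS or authentication, and should ideally be run from inside the cluster network:

```shell
# Minimal TCP reachability probe: prints "reachable: host:port" or
# "unreachable: host:port" (no TLS/auth check, just connectivity).
check_tcp() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo "reachable: $1:$2" \
    || echo "unreachable: $1:$2"
}
check_tcp "prod-us-east.cluster-xxxxxxx.us-east-1.rds.amazonaws.com" 5432
```

A fuller check would use psql or clickhouse-client with the actual credentials.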
Ingress Configuration
Enable ingress for external access:
apiserver:
# Match this with the hostname of the ingress
deploymentUrl: http(s)://metoro.yourdomain.com
ingress:
enabled: true
className: "nginx"
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "letsencrypt-prod"
hosts:
- host: "metoro.yourdomain.com"
paths:
- path: /
pathType: Prefix
tls:
- secretName: metoro-tls
hosts:
- metoro.yourdomain.com
ingester:
ingress:
enabled: true
className: "nginx"
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "letsencrypt-prod"
hosts:
- host: "ingest.metoro.yourdomain.com"
paths:
- path: /
pathType: Prefix
tls:
- secretName: metoro-ingester-tls
hosts:
- ingest.metoro.yourdomain.com
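Since deploymentUrl must use the same hostname as the API server ingress, a small sanity check before deploying can catch copy-paste mismatches. A sketch using the example values above:

```shell
# Sanity check: deploymentUrl must use the same hostname as the ingress host.
DEPLOYMENT_URL="https://metoro.yourdomain.com"
INGRESS_HOST="metoro.yourdomain.com"
host_from_url=${DEPLOYMENT_URL#*://}   # strip the scheme prefix
if [ "$host_from_url" = "$INGRESS_HOST" ]; then
  echo "match"
else
  echo "mismatch: $host_from_url vs $INGRESS_HOST"
fi
# → match
```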
Maintenance
Upgrading Metoro
Minor version upgrades can be applied with a standard helm upgrade command:
helm upgrade --install --namespace metoro-hub metoro ./ -f values.yaml
Major version upgrades will require a more in-depth migration process. Each major release will have a migration guide available on the Metoro website and in the helm chart itself.
Support and Resources
For additional support:
- Reach out to us directly via your dedicated slack connect channel
- Join our Slack community
- Contact us at support@metoro.io
- Live chat via intercom on metoro.io (bottom right of the page)
Full configuration reference
Below is the full list of configuration options for the Metoro Helm chart.
Key | Type | Default | Description |
---|---|---|---|
apiserver.defaultOnPremAdmin.email | string | "admin@metoro.io" | Default admin email address set up on first login |
apiserver.defaultOnPremAdmin.environmentName | string | "Default Environment" | Default environment name set up on first login |
apiserver.defaultOnPremAdmin.name | string | "Admin" | Default admin name set up on first login |
apiserver.defaultOnPremAdmin.organization | string | "Default Organization" | Default organization name set up on first login |
apiserver.defaultOnPremAdmin.password | string | "admin123" | Default admin password set up on first login |
apiserver.defaultOnPremAdmin.serviceAccount.annotations | object | {} | Service account annotations |
apiserver.defaultOnPremAdmin.serviceAccount.create | boolean | true | Whether to create a service account |
apiserver.deploymentUrl | string | "https://somedeploymenturl.tld..." | Deployment URL for the API server |
apiserver.image.pullPolicy | string | "IfNotPresent" | Image pull policy for API server container |
apiserver.image.repository | string | "quay.io/metoro/metoro-apiserver" | Docker image repository for API server |
apiserver.ingress.annotations | object | {"kubernetes.io/ingress.class": "nginx"} | Ingress annotations for API server |
apiserver.ingress.className | string | "nginx" | Ingress class name for API server |
apiserver.ingress.enabled | boolean | false | Enable/disable ingress for API server |
apiserver.ingress.hosts[0].host | string | "api.local.test" | Ingress hostname for API server |
apiserver.ingress.hosts[0].paths[0].path | string | "/" | Path for ingress rule |
apiserver.ingress.hosts[0].paths[0].pathType | string | "Prefix" | Path type for ingress rule |
apiserver.name | string | "apiserver" | Name of the API server component |
apiserver.replicas | integer | 4 | Number of API server replicas |
apiserver.resources.limits.cpu | string/number | 4 | CPU resource limit for API server |
apiserver.resources.limits.memory | string | "16Gi" | Memory resource limit for API server |
apiserver.resources.requests.cpu | string/number | 1 | Requested CPU resources for API server |
apiserver.resources.requests.memory | string | "2Gi" | Requested memory resources for API server |
apiserver.service.name | string | "apiserver" | Name of the API server service |
apiserver.service.port | integer | 80 | Service port for API server |
apiserver.service.targetPort | integer | 8080 | Target port for API server service |
apiserver.service.type | string | "ClusterIP" | Kubernetes service type for API server |
authSecret.authMaterial | string | "SOME_AUTH_MATERIAL" | Authentication material used to sign JWTs for API server and ingester |
authSecret.name | string | "auth-secret" | Name of the authentication secret |
awsSecret.credentialsFileContents | string | "SOME_AWS_SECRET" | AWS credentials file contents, currently not supported on-premises |
awsSecret.name | string | "aws-secret" | Name of the AWS secret, currently not supported on-premises |
awsSesSecret.awsSesAccessKeyId | string | "SOME_AWS_SES_ACCESS_KEY_ID" | AWS SES access key ID, currently not supported on-premises |
awsSesSecret.awsSesRegion | string | "SOME_AWS_SES_REGION" | AWS SES region, currently not supported on-premises |
awsSesSecret.awsSesSecretKey | string | "SOME_AWS_SES_SECRET_KEY" | AWS SES secret key, currently not supported on-premises |
awsSesSecret.name | string | "aws-ses-secret" | Name of the AWS SES secret, currently not supported on-premises |
clickhouse.containerPorts.tcp | integer | 9440 | TCP container port for in-cluster ClickHouse |
clickhouse.containerPorts.tcpSecure | integer | 20434 | Secure TCP container port for in-cluster ClickHouse |
clickhouse.enabled | boolean | false | Enable/disable in-cluster ClickHouse installation |
clickhouse.persistence.size | string | "100Gi" | Storage size for in-cluster ClickHouse |
clickhouse.replicaCount | integer | 1 | Number of ClickHouse replicas |
clickhouse.resourcesPreset | string | "2xlarge" | Resource preset for ClickHouse |
clickhouse.secret.clickhouseDatabase | string | "SOME_CLICKHOUSE_DATABASE" | ClickHouse database name |
clickhouse.secret.clickhousePassword | string | "SOME_CLICKHOUSE_PASSWORD" | ClickHouse password |
clickhouse.secret.clickhouseUrl | string | "SOME_CLICKHOUSE_HOST" | ClickHouse URL |
clickhouse.secret.clickhouseUser | string | "SOME_CLICKHOUSE_USER" | ClickHouse user |
clickhouse.secret.name | string | "clickhouse-secret" | Name of the ClickHouse secret |
clickhouse.service.ports.tcp | integer | 9440 | TCP port for ClickHouse |
clickhouse.service.ports.tcpSecure | integer | 20434 | Secure TCP port for ClickHouse |
clickhouse.shards | integer | 1 | Number of ClickHouse shards |
clickhouse.zookeeper.enabled | boolean | false | Enable/disable ZooKeeper for ClickHouse |
environment | string | "none" | Environment for the deployment, e.g. "dev", "test", "prod" |
imagePullSecret.data | string | "SOME_DOCKERHUB_CREDENTIAL" | Registry credentials in dockerconfigjson format |
imagePullSecret.name | string | "dockerhub-credentials" | Name of the Docker registry credentials secret |
ingester.autoscaling.horizontalPodAutoscaler.enabled | boolean | true | Enable/disable HPA for ingester |
ingester.autoscaling.horizontalPodAutoscaler.maxReplicas | integer | 10 | Maximum number of replicas for HPA |
ingester.autoscaling.horizontalPodAutoscaler.minReplicas | integer | 4 | Minimum number of replicas for HPA |
ingester.autoscaling.horizontalPodAutoscaler.name | string | "metoro-ingester-hpa" | Name of the HPA |
ingester.autoscaling.horizontalPodAutoscaler.targetCPUUtilizationPercentage | integer | 60 | Target CPU utilization percentage for scaling |
ingester.configMap.name | string | "ingester-config" | Name of the ingester ConfigMap |
ingester.image.pullPolicy | string | "IfNotPresent" | Image pull policy for ingester container |
ingester.image.repository | string | "quay.io/metoro/metoro-ingester" | Docker image repository for ingester |
ingester.ingress.annotations | object | {"kubernetes.io/ingress.class": "nginx"} | Ingress annotations |
ingester.ingress.className | string | "nginx" | Ingress class name |
ingester.ingress.enabled | boolean | false | Enable/disable ingress for ingester |
ingester.ingress.hosts[0].host | string | "ingester.local.test" | Ingress hostname |
ingester.ingress.hosts[0].paths[0].path | string | "/" | Path for ingress rule |
ingester.ingress.hosts[0].paths[0].pathType | string | "Prefix" | Path type for ingress rule |
ingester.name | string | "ingester" | Name of the ingester component |
ingester.replicas | integer | 4 | Number of ingester replicas to deploy |
ingester.resources.limits.cpu | string/number | 4 | CPU resource limit for each ingester pod |
ingester.resources.limits.memory | string | "16Gi" | Memory resource limit for each ingester pod |
ingester.resources.requests.cpu | string/number | 1 | Requested CPU resources for each ingester pod |
ingester.resources.requests.memory | string | "2Gi" | Requested memory resources for each ingester pod |
ingester.service.name | string | "ingester" | Name of the ingester service |
ingester.service.port | integer | 80 | Service port for ingester |
ingester.service.targetPort | integer | 8080 | Target port for ingester service |
ingester.service.type | string | "ClusterIP" | Kubernetes service type for ingester |
klaviyoSecret.klaviyoApiKey | string | "SOME_KLAVIYO_API_KEY" | Klaviyo API key, currently not supported on-premises |
klaviyoSecret.name | string | "klaviyo-secret" | Name of the Klaviyo secret, currently not supported on-premises |
onPrem.isOnPrem | boolean | false | Flag for on-premises deployment, should always be set to true on-premises |
pagerDutySecret.name | string | "pagerduty-secret" | Name of the PagerDuty secret, currently not supported on-premises |
pagerDutySecret.pagerDutyClientId | string | "SOME_PAGERDUTY_CLIENT_ID" | PagerDuty client ID, currently not supported on-premises |
pagerDutySecret.pagerDutyClientSecret | string | "SOME_PAGERDUTY_CLIENT_SECRET" | PagerDuty client secret, currently not supported on-premises |
postgresSecret.name | string | "postgres-secret" | Name of the PostgreSQL secret for external database |
postgresSecret.postgresDatabase | string | "SOME_POSTGRES_DATABASE" | PostgreSQL database name for external database |
postgresSecret.postgresHost | string | "SOME_POSTGRES_HOST" | PostgreSQL host for external database |
postgresSecret.postgresPassword | string | "SOME_POSTGRES_PASSWORD" | PostgreSQL password for external database |
postgresSecret.postgresPort | string | "SOME_POSTGRES_PORT" | PostgreSQL port for external database |
postgresSecret.postgresUser | string | "SOME_POSTGRES_USER" | PostgreSQL user for external database |
postgresql.enabled | boolean | false | Enable/disable in-cluster PostgreSQL installation |
postgresql.auth.postgresPassword | string | "CHANGE_ME" | PostgreSQL password for in-cluster database |
postgresql.persistence.size | string | "2Gi" | Storage size for in-cluster PostgreSQL |
slackSecret.name | string | "slack-secret" | Name of the Slack secret, currently not supported on-premises |
slackSecret.slackClientId | string | "SOME_SLACK_CLIENT_ID" | Slack client ID, currently not supported on-premises |
slackSecret.slackClientSecret | string | "SOME_SLACK_CLIENT_SECRET" | Slack client secret, currently not supported on-premises |
stripeSecret.name | string | "stripe-secret" | Name of the Stripe secret, currently not supported on-premises |
stripeSecret.stripeKey | string | "SOME_STRIPE_KEY" | Stripe API key, currently not supported on-premises |
temporal.admintools.image.tag | string | "1.24.2-tctl-1.18.1-cli-0.13.2" | Temporal admin tools image tag |
temporal.cassandra.enabled | boolean | false | Enable/disable Cassandra for Temporal, currently not supported on-premises |
temporal.elasticsearch.enabled | boolean | false | Enable/disable Elasticsearch for Temporal, currently not supported on-premises |
temporal.enabled | boolean | false | Enable/disable Temporal, should be set to true in production |
temporal.grafana.enabled | boolean | false | Enable/disable Grafana for Temporal, currently not supported on-premises |
temporal.mysql.enabled | boolean | false | Enable/disable MySQL for Temporal, currently not supported on-premises |
temporal.postgres.enabled | boolean | true | Enable/disable PostgreSQL for Temporal, should be set to true in production |
temporal.prometheus.enabled | boolean | false | Enable/disable Prometheus for Temporal, currently not supported on-premises |
temporal.schema.setup.enabled | boolean | true | Enable/disable Temporal schema setup, should be set to true in production |
temporal.schema.update.enabled | boolean | true | Enable/disable Temporal schema updates, should be set to true in production |
temporal.server.replicaCount | integer | 1 | Number of Temporal server replicas; 1 is sufficient for production, use 2 or more for HA |
temporalEndpoint | string | "temporal-frontend:7233" | Temporal service endpoint |
versions.dev.apiserver | string | "0.856.0" | API server version for dev |
versions.dev.isdev | boolean | false | Development version flag |
versions.dev.ingester | string | "0.856.0" | Ingester version for dev |
versions.onprem.apiserver | string | "0.856.0" | API server version for on-prem |
versions.onprem.isonprem | boolean | true | On-premises version flag |
versions.onprem.ingester | string | "0.856.0" | Ingester version for on-prem |
versions.prod.apiserver | string | "0.856.0" | API server version for prod |
versions.prod.isprod | boolean | false | Production version flag |
versions.prod.ingester | string | "0.856.0" | Ingester version for prod |