Metoro uses artificial intelligence to help you understand and debug your infrastructure more efficiently. This page explains how we use AI, which models and providers power our features, and our commitments to protecting your data.
By using AI features, you agree to our AI Terms of Service.
AI features are strictly opt-in: no customer data is processed by AI until a customer explicitly enables them. Customers can disable AI features at any time.
| Provider | Model | Purpose | Privacy Policy |
|---|---|---|---|
| Anthropic | Claude | Primary AI provider | View Policy |
| OpenAI | GPT family | AI provider | View Policy |
| AWS Bedrock | Anthropic models | AI provider with customer-owned AWS keys | View Policy |
We only work with AI vendors that commit not to use customer data for training, advertising, or any purpose other than the requested inference.
Only the minimal telemetry and diagnostic data necessary to fulfill the AI use case is sent to AI models.
All data transfers to AI vendors use strong encryption (TLS).
Any AI-generated output that may impact customer systems (e.g., configuration changes, code fixes) must be reviewed by a qualified human engineer before being applied.
The following types of telemetry and diagnostic data may be sent to our AI providers to power Metoro's intelligent features:
| Data Type | Description |
|---|---|
| Logs | Application and system log entries |
| Metrics | Performance and resource utilization metrics |
| Traces | Distributed tracing data |
| Profiling | Application profiling and performance data |
| Configuration Metadata | Infrastructure and application configuration information |
| Cluster Information | Kubernetes cluster topology and resource data |
| Code | Source code analyzed to identify and diagnose bugs (only if GitHub integration is enabled) |
This diagram shows how customer data flows through our AI system, from trigger events to final storage.
AI agents are created in response to specific events: alerts firing, anomaly detection, new deployments, or cluster events. These triggers initiate the AI analysis process.
Each AI agent runs in an isolated, sandboxed container with no general internet access. The agent can only use tools explicitly provided by Metoro, ensuring controlled and secure operation.
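One way to picture this restriction is a tool allowlist: the agent can invoke only functions the platform has explicitly registered, and anything else is rejected. The sketch below is purely illustrative; the class and tool names are assumptions, not Metoro's actual API.

```python
# Hypothetical sketch: agents may only call explicitly registered tools.
# Anything outside the allowlist (e.g. arbitrary network access) is refused.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        """Explicitly allow a tool by name."""
        self._tools[name] = fn

    def call(self, name, *args, **kwargs):
        """Invoke a tool only if it was explicitly registered."""
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} is not allowed")
        return self._tools[name](*args, **kwargs)


registry = ToolRegistry()
# Only telemetry-access tools are registered; no general-purpose HTTP tool.
registry.register("get_logs", lambda service: [f"log line from {service}"])

print(registry.call("get_logs", "checkout"))  # allowed

try:
    registry.call("http_get", "https://example.com")  # no general internet access
except PermissionError as e:
    print(f"blocked: {e}")
```

In practice the sandbox boundary would be enforced at the container and network layer as well, with the allowlist acting as an additional application-level control.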
The agent communicates with an LLM provider in a loop until the task is complete. Three provider options are available: Anthropic, OpenAI, or Anthropic models served via AWS Bedrock with customer-owned AWS keys.
All communications are TLS encrypted, and LLM providers are contractually bound not to train on your data.
The LLM decides what telemetry data is needed to complete its task. It can access logs, metrics, traces, profiling information, and cluster metadata from Metoro's Telemetry Store. If the optional GitHub integration is enabled, the agent can also read source code for debugging purposes.
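The loop described above can be sketched roughly as follows: on each iteration the LLM either requests a telemetry tool or returns a final answer. The mock LLM, tool names, and message shapes here are illustrative assumptions used to show the control flow, not Metoro's real implementation.

```python
# Hypothetical sketch of the agent loop: the LLM decides which telemetry
# it needs, the agent fetches it via allowlisted tools, and the loop ends
# when the LLM produces a final answer.

def mock_llm(messages):
    """Stand-in for a real LLM call: first request logs, then conclude."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "get_logs", "args": {"service": "api"}}
    return {"type": "final", "answer": "Root cause: OOM in service 'api'."}


# Only tools explicitly provided by the platform are callable.
TOOLS = {
    "get_logs": lambda service: [f"{service}: OOMKilled at 12:01"],
}

def run_agent(task):
    messages = [{"role": "user", "content": task}]
    while True:
        reply = mock_llm(messages)
        if reply["type"] == "final":
            return reply["answer"]
        # Dispatch strictly through the allowlisted tool table.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})


print(run_agent("Why did the api pod restart?"))
```

A real agent would carry richer conversation state and handle provider errors and retries, but the shape of the loop (tool call, tool result, repeat until done) is the same.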
- All connections use TLS encryption in transit
- All stored data is encrypted at rest
- LLM providers do not train on your data
- Agents have no general internet access
The output from AI analysis is stored in Metoro's Metadata Store, which is also encrypted at rest and accessed via TLS. This allows you to review AI-generated insights and recommendations.
For questions, requests, or concerns related to AI use, please contact us at security@metoro.io.
We may update this page from time to time as we add new AI capabilities or change providers.
Based on AI Use Policy Version 1.0 (Approved November 20, 2025)