AI Usage

Metoro uses artificial intelligence to help you understand and debug your infrastructure more efficiently. This page explains how we use AI, which models and providers power our features, and our commitments to protecting your data.

By using AI features, you agree to our AI Terms of Service.

Customer Control

Opt-In Only

AI features are strictly opt-in. Customers must explicitly enable AI features before any customer data is processed by AI.

Disable Anytime

Customers can disable AI features at any time.
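The opt-in and disable controls above amount to a gate in front of all AI processing. A minimal sketch of how such a gate might work (the class and function names are illustrative, not Metoro's actual API):

```python
# Hypothetical sketch: AI processing is gated on an explicit,
# revocable per-customer opt-in flag.
from typing import Optional


class AISettings:
    """Per-customer AI feature flag; off unless explicitly enabled."""

    def __init__(self) -> None:
        self.ai_enabled = False  # strictly opt-in: disabled by default

    def enable(self) -> None:
        self.ai_enabled = True

    def disable(self) -> None:
        self.ai_enabled = False


def process_with_ai(settings: AISettings, payload: dict) -> Optional[dict]:
    """Run AI analysis only if the customer has opted in; otherwise do nothing."""
    if not settings.ai_enabled:
        return None  # no customer data reaches any AI model
    return {"analysis": f"examined {len(payload)} fields"}
```

Because the flag defaults to off and the check sits before any model call, data is only ever sent while the customer has the feature enabled.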

AI Models & Providers

| Provider | Model | Purpose | Privacy Policy |
| --- | --- | --- | --- |
| Anthropic | Claude | Primary AI provider | View Policy |
| OpenAI | GPT family | AI provider | View Policy |
| AWS Bedrock | Anthropic models | AI provider with customer-owned AWS keys | View Policy |

Data Privacy & Security

No Training on Customer Data

We only work with AI vendors that commit not to use customer data for training, advertising, or any purpose other than the requested inference.

Data Minimization

Only the minimal telemetry and diagnostic data necessary to fulfill the AI use case is sent to AI models.
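One common way to implement this kind of minimization is an explicit allow-list: only named fields ever leave the platform, and everything else is dropped by default. A sketch under that assumption (the field names are illustrative):

```python
# Hypothetical sketch: only an explicit allow-list of telemetry fields
# is ever forwarded to an AI model; unknown fields are dropped by default.
ALLOWED_FIELDS = {"timestamp", "level", "message", "service"}  # assumed minimal set


def minimize(record: dict) -> dict:
    """Strip a telemetry record down to the fields the AI use case needs."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


raw = {
    "timestamp": "2025-11-20T10:00:00Z",
    "level": "error",
    "message": "connection refused",
    "service": "checkout",
    "pod_ip": "10.0.3.7",       # dropped before any AI call
    "user_email": "a@b.com",    # dropped before any AI call
}
```

The allow-list direction matters: new fields added to telemetry later are excluded automatically until someone deliberately adds them.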

Encryption

All data transfers to AI vendors use strong encryption (TLS).
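As an illustration of what "strong encryption (TLS)" means on the client side, here is how a Python service could pin a minimum TLS version and certificate verification for its outbound connections, using only the standard library:

```python
# Sketch: enforcing modern TLS for any outbound call to an AI vendor.
import ssl

ctx = ssl.create_default_context()            # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2
ctx.check_hostname = True                     # already the default; shown for clarity
```

Passing a context like this to an HTTPS client guarantees the connection fails closed rather than silently downgrading.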

Human Oversight

Any AI-generated output that may impact customer systems (e.g., configuration changes, code fixes) must be reviewed by a qualified human engineer before being applied.
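In code, this oversight requirement is naturally expressed as an approval gate: a suggested change carries no effect until a human signs off. A minimal sketch (names are illustrative, not Metoro's implementation):

```python
# Hypothetical sketch: AI-suggested changes wait until a named engineer
# approves them; unreviewed output is never applied automatically.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SuggestedChange:
    description: str
    approved_by: Optional[str] = None  # set only by a human reviewer

    def approve(self, engineer: str) -> None:
        self.approved_by = engineer


def apply_change(change: SuggestedChange) -> bool:
    """Apply a change only after human review; return whether it was applied."""
    if change.approved_by is None:
        return False  # blocked: no human sign-off yet
    # ...apply the configuration change or code fix here...
    return True
```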

Data Types Sent to AI

The following types of telemetry and diagnostic data may be sent to our AI providers to power Metoro's intelligent features:

| Data Type | Description |
| --- | --- |
| Logs | Application and system log entries |
| Metrics | Performance and resource utilization metrics |
| Traces | Distributed tracing data |
| Profiling | Application profiling and performance data |
| Configuration Metadata | Infrastructure and application configuration information |
| Cluster Information | Kubernetes cluster topology and resource data |
| Code | Source code analyzed to identify and diagnose bugs (only if GitHub integration is enabled) |

How AI Data Processing Works

This diagram shows how customer data flows through our AI system, from trigger events to final storage.

[Diagram: data flow from trigger events, through sandboxed agents and LLM providers, to final storage]

Understanding the Data Flow

1. Trigger Conditions

AI agents are created in response to specific events: alerts firing, anomaly detection, new deployments, or cluster events. These triggers initiate the AI analysis process.
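The important property is that the trigger set is closed: only these event kinds start an analysis, and anything else is ignored. A sketch of that dispatch, with a made-up event shape:

```python
# Hypothetical sketch: only the listed trigger kinds spawn an analysis agent;
# any other event is ignored.
TRIGGERS = {"alert_fired", "anomaly_detected", "new_deployment", "cluster_event"}


def maybe_create_agent(event: dict):
    """Return an agent task for a recognized trigger, else None."""
    kind = event.get("kind")
    if kind not in TRIGGERS:
        return None  # unrelated events never start AI analysis
    return {"task": f"investigate:{kind}", "context": event}
```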

2. Sandboxed Agent Execution

Each AI agent runs in an isolated, sandboxed container with no general internet access. The agent can only use tools explicitly provided by Metoro, ensuring controlled and secure operation.
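For readers who want a concrete picture, a container with these properties can be launched with standard `docker run` flags; the sketch below builds such a command (the image name is made up, and this is an illustration of the sandboxing idea, not Metoro's actual runtime):

```python
# Hypothetical sketch of a sandboxed launch using standard `docker run` flags.
# The key property is `--network=none`: the agent can only reach tools that
# are explicitly mounted in, never the general internet.
def sandbox_command(agent_image: str) -> list:
    return [
        "docker", "run", "--rm",
        "--network=none",  # no general internet access
        "--read-only",     # immutable root filesystem
        "--cap-drop=ALL",  # drop all extra Linux capabilities
        agent_image,
    ]
```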

3. LLM Provider Selection

The agent communicates with an LLM provider in a loop until the task is complete. Three options are available:

  • Customer-controlled Bedrock - Use your own AWS keys for full control
  • Anthropic - Via Metoro's account
  • AWS Bedrock - Via Metoro's account

All communications are TLS encrypted, and LLM providers are contractually bound not to train on your data.
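The "loop until the task is complete" pattern above is the standard tool-use agent loop: the LLM either asks for a tool or declares the task done. A self-contained sketch with a stubbed model standing in for the real provider call (all names here are illustrative):

```python
# Hypothetical sketch of the agent loop: call the LLM, run any tool it asks
# for, feed the result back, and stop when it reports the task is done.
# `stub_llm` stands in for a real TLS call to an LLM provider.
def stub_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_logs", "args": {"service": "checkout"}}
    return {"done": True, "answer": "checkout pods are crash-looping"}


TOOLS = {"get_logs": lambda service: f"logs for {service}: OOMKilled x3"}


def run_agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        reply = stub_llm(messages)
        if reply.get("done"):
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
```

Because the agent can only invoke entries in its tool table, the loop's capabilities are bounded by whatever Metoro chooses to expose.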

4. Data Access

The LLM decides what telemetry data is needed to complete its task. It can access logs, metrics, traces, profiling information, and cluster metadata from Metoro's Telemetry Store. If the optional GitHub integration is enabled, the agent can also read source code for debugging purposes.
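One way to picture this access model: the agent's tool set is assembled per customer, and the source-code tool exists only when the GitHub integration is switched on. A sketch under that assumption (tool names are illustrative):

```python
# Hypothetical sketch: the tool set exposed to an agent depends on what the
# customer has enabled; source-code access exists only with the GitHub integration.
BASE_TOOLS = {
    "get_logs", "get_metrics", "get_traces",
    "get_profiles", "get_cluster_metadata",
}


def available_tools(github_integration_enabled: bool) -> set:
    """Assemble the per-customer tool set handed to a new agent."""
    tools = set(BASE_TOOLS)
    if github_integration_enabled:
        tools.add("read_source_code")
    return tools
```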

5. Security Guarantees

  • 🔒 TLS Encryption: all connections use TLS encryption in transit
  • 🔐 Encrypted at Rest: all stored data is encrypted at rest
  • 🚫 No Training: LLM providers do not train on your data
  • 📦 Sandboxed Execution: agents have no general internet access

6. Result Storage

The output from AI analysis is stored in Metoro's Metadata Store, which is also encrypted at rest and accessed via TLS. This allows you to review AI-generated insights and recommendations.

Contact

For questions, requests, or concerns related to AI use, please contact us at security@metoro.io.

Updates to This Page

We may update this page from time to time as we add new AI capabilities or change providers.

Based on AI Use Policy Version 1.0 (Approved November 20, 2025)

Related Documents