Sub-processors
Last updated: 8 May 2026
A sub-processor is a third party that processes personal data on our behalf to help us deliver the Service. We use a small number of them, picked for the role they play in the platform and the security commitments they make.
The list below is current as of the date above. We give Customers advance notice before adding a new sub-processor that processes Customer Content, and Customers may object on reasonable grounds. To receive notifications of changes, email privacy@levainlabs.com with the subject "subscribe: sub-processors".
Infrastructure
Amazon Web Services
Purpose. Compute (EKS), object storage (S3), managed database (RDS), key management (KMS), model inference (Bedrock), transactional email (SES), and content delivery (CloudFront). The Service runs on AWS infrastructure end to end.
Data. Customer Content, Account Data, usage and telemetry logs, transactional emails.
Reference. aws.amazon.com/compliance/data-privacy
AI model providers
In production, model inference runs through AWS Bedrock, which hosts foundation models from a range of providers including Anthropic, Meta, Cohere, Mistral AI, AI21 Labs, Stability AI, and Amazon's own model families. Customers may build agents that invoke any model the platform exposes from this catalog, and the corresponding model owner acts as a sub-processor when its model serves a request. Direct provider routes (outside Bedrock) are used in development and when a Customer brings their own provider key (BYOK).
Anthropic
Purpose. Claude model inference. Used via AWS Bedrock for production traffic, and directly for development and for Customers who bring their own Anthropic key (BYOK).
Data. Prompts and any context the Customer's agents send to Claude, plus the model's outputs. Anthropic does not train its models on data sent through the API.
Reference. anthropic.com/legal/privacy
OpenAI
Purpose. Model inference. Used directly for development and when a Customer brings their own OpenAI key.
Data. Prompts and context the Customer's agents send to OpenAI, plus the model's outputs. OpenAI does not train its models on data sent through the API.
Reference. openai.com/policies/privacy-policy
Other foundation model providers via AWS Bedrock
Purpose. When a Customer's agent invokes a Bedrock-hosted model from a provider other than Anthropic (for example, Meta Llama, Cohere, Mistral, AI21 Labs, or Stability AI), that provider's model serves the request from inside AWS Bedrock. AWS's commercial agreement with each model provider governs how data is handled in that environment.
Data. Prompts and outputs for invocations the Customer routes to the model. Traffic stays within AWS Bedrock and is not sent to the provider's own external API.
Reference. aws.amazon.com/bedrock. The set of providers and models available through Bedrock changes over time; the operative list at any moment is the one AWS publishes there.
Identity and access
WorkOS
Purpose. User authentication, single sign-on (SSO), and directory sync for Customer organisations.
Data. Account Data: name, work email, authentication tokens, and SSO/SCIM directory metadata.
Reference. workos.com/legal/privacy
Payments
Stripe
Purpose. Payment processing, billing, invoicing, and tax handling.
Data. Billing contact details, billing address, payment-method information. Card numbers are handled directly by Stripe; we do not store them.
Reference. stripe.com/privacy
Anti-abuse
Google reCAPTCHA Enterprise
Purpose. Bot and abuse detection on the public contact form. Not used inside the authenticated application.
Data. IP address, browser and device signals, interaction signals collected by reCAPTCHA at page load, and the email and message you submit through the contact form.
Reference. policies.google.com/privacy
Notes on what is not on this list
The following components are part of the Service but are operated by Levain Labs on our own AWS infrastructure rather than by a third party, so they are not sub-processors:
- Application databases (PostgreSQL on AWS RDS managed by us, ClickHouse self-hosted on EKS, Redis self-hosted on EKS).
- The LiteLLM proxy that routes inference requests.
- Internal observability and logging (the LGTM stack: Loki, Grafana, Tempo, Mimir), self-hosted on EKS.
Third-party services that a Customer connects to their own agents (for example, Slack, GitHub, Intercom, or other integrations chosen by the Customer) are not our sub-processors. The Customer's relationship with those services is governed by the Customer's own agreement with them.
Contact
Questions about this list, requests for change notifications, or objections to a new sub-processor: privacy@levainlabs.com.