When running LangSmith on Google Cloud Platform (GCP), you can set up in either full self-hosted or hybrid mode. Full self-hosted mode deploys a complete LangSmith platform with observability functionality as well as the option to create agent deployments. Hybrid mode entails just the infrastructure to run agents in a data plane within your cloud, while our SaaS provides the control plane and observability functionality. This page provides GCP-specific architecture patterns, service recommendations, and best practices for deploying and operating LangSmith on GCP.
LangChain provides Terraform modules specifically for GCP to help provision infrastructure for LangSmith. These modules can quickly set up GKE clusters, Cloud SQL, Memorystore for Redis, Cloud Storage, and networking resources. View the GCP Terraform modules for documentation and examples.
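As a rough sketch of how such a module is consumed, the snippet below shows a typical Terraform module invocation. The module source and input variable names here are placeholders, not the module's real interface; consult the GCP Terraform modules repository for the actual source address and supported variables.

```hcl
# Illustrative only: source and variable names are placeholders.
module "langsmith_gcp" {
  source = "<langchain-gcp-terraform-module>"

  project_id = "my-gcp-project" # placeholder GCP project
  region     = "us-central1"    # placeholder region
}
```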

Reference architecture

We recommend leveraging GCP’s managed services to provide a scalable, secure, and resilient platform. The following architecture applies to both self-hosted and hybrid modes and aligns with the Google Cloud Well-Architected Framework:

[Architecture diagram showing GCP relations to LangSmith services]
  • Ingress & networking: Requests enter via Cloud Load Balancing within your VPC, secured using Cloud Armor and IAM-based authentication.
  • Frontend & backend services: Containers run on Google Kubernetes Engine (GKE), orchestrated behind the load balancer, which routes requests to other services within the cluster as necessary.
  • Storage & databases:
    • Cloud SQL for PostgreSQL: metadata, projects, users, and short-term and long-term memory for deployed agents. LangSmith supports PostgreSQL version 14 or higher.
    • Memorystore for Redis: caching and job queues. Memorystore can be in single-instance or cluster mode, running Redis OSS version 5 or higher.
    • ClickHouse + Persistent Disks: analytics and trace storage.
    • Cloud Storage: object storage for trace artifacts and telemetry.
  • LLM integration: Optionally proxy requests to Vertex AI for LLM inference.
  • Monitoring & observability: Integrate with Cloud Monitoring and Cloud Logging.
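To connect a LangSmith installation to the managed services above, the Helm chart's values are typically overridden to point at external endpoints. The sketch below illustrates the idea only; the key names and connection-string formats are placeholders, so check the LangSmith Helm chart's values reference for the real schema.

```yaml
# Sketch: wiring LangSmith to GCP managed services via Helm values.
# Key names are illustrative placeholders, not the chart's real schema.
postgres:
  external:
    enabled: true
    # Cloud SQL for PostgreSQL (version 14+), reachable from the GKE VPC
    connectionUrl: "postgres://langsmith:<password>@<cloud-sql-ip>:5432/langsmith"
redis:
  external:
    enabled: true
    # Memorystore for Redis (Redis OSS 5+), single-instance or cluster mode
    connectionUrl: "redis://<memorystore-ip>:6379"
blobStorage:
  enabled: true
  # Cloud Storage bucket for trace artifacts and telemetry
  bucketName: "<gcs-bucket-name>"
```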

Compute options

LangSmith supports multiple compute options depending on your requirements:
| Compute option | Description | Suitable for |
| --- | --- | --- |
| Google Kubernetes Engine (preferred) | Advanced scaling and multi-tenant support | Large enterprises |
| Compute Engine-based | Full control, BYO-infra | Regulated or air-gapped environments |

Google Cloud Well-Architected best practices

This reference is designed to align with the six pillars of the Google Cloud Well-Architected Framework:
  • Operational excellence
  • Security
  • Reliability
  • Performance optimization
  • Cost optimization
  • Sustainability

Security and compliance

LangSmith can be deployed in Assured Workloads regions for compliance with ISO, HIPAA, or other regulatory requirements as needed.

Monitoring and evals

Use LangSmith to:
  • Capture traces from LLM apps running on Vertex AI.
  • Evaluate model outputs via LangSmith datasets.
  • Track latency, token usage, and success rates.
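To route traces from an application (for example, one calling Vertex AI) into a self-hosted LangSmith instance, the SDK is usually configured through environment variables. The variable names below follow LangSmith's documented `LANGSMITH_*` convention; the endpoint URL, API key, and project name are placeholders for your own deployment.

```shell
# Point a LangSmith SDK / LangChain app at a self-hosted instance.
# Endpoint, key, and project values are placeholders.
export LANGSMITH_TRACING=true
export LANGSMITH_ENDPOINT="https://langsmith.internal.example.com"  # your instance URL
export LANGSMITH_API_KEY="<your-api-key>"                           # placeholder
export LANGSMITH_PROJECT="vertex-ai-agents"                         # example project name
```

With these set, traced runs are sent to the configured project on your instance rather than to the SaaS endpoint.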