Google Cloud is a fully supported deployment platform for Tensor9 appliances. Deploying into customers' Google Cloud environments gives your appliances Google's global infrastructure, enterprise-grade security and scalability, and direct integration with each customer's existing Google Cloud resources.

Overview

When you deploy an application to Google Cloud customer environments using Tensor9:
  • Customer appliances run entirely within the customer’s Google Cloud project
  • Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
  • Service account impersonation enables your control plane to manage customer appliances with customer-approved permissions
  • Service equivalents compile your origin stack into Google Cloud-native resources
Google Cloud appliances leverage Google Cloud services for compute, storage, networking, and observability, providing enterprise-grade infrastructure that integrates seamlessly with your customers’ existing Google Cloud environments.

Prerequisites

Before deploying appliances to Google Cloud customer environments, ensure:

Your control plane

  • Dedicated AWS account for your Tensor9 control plane
  • Control plane installed - See Installing Tensor9
  • Origin stack published - Your application infrastructure defined and uploaded

Customer Google Cloud project

Your customers must provide:
  • Google Cloud project where the appliance will be deployed
  • Service accounts configured for the four-phase permissions model (Install, Steady-state, Deploy, Operate)
  • VPC and networking configured according to their requirements
  • Sufficient API quotas for your application’s resource needs
  • Google Cloud region where they want the appliance deployed

Your development environment

  • gcloud CLI installed and configured
  • Terraform or OpenTofu (if using Terraform origin stacks)
  • Docker (if deploying container-based applications)

How Google Cloud appliances work

Google Cloud appliances are deployed using Google Cloud-native services orchestrated by your Tensor9 control plane.
1. Customer provisions service accounts

Your customer creates four service accounts in their Google Cloud project, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These service accounts define what your control plane can do within their environment. The customer configures IAM policies that allow your control plane's service account to impersonate these service accounts with appropriate conditions (time windows, approval labels, etc.).
2. You create a release for the customer appliance

You create a release targeting the customer’s appliance:
tensor9 stack release create \
  -appName my-app \
  -customerName acme-corp \
  -vendorVersion "1.0.0" \
  -description "Initial production deployment"
Your control plane compiles your origin stack into a deployment stack tailored for Google Cloud, compiling any non-Google Cloud resources to their Google Cloud service equivalents. The deployment stack downloads to your local environment.
3. Customer grants deploy access

The customer approves the deployment by granting temporary deploy access. This can be manual (updating IAM policy conditions) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can impersonate the Deploy service account in the customer's project.
4. You deploy the release

You run the deployment locally against the downloaded deployment stack:
cd acme-corp-production
tofu init
tofu apply
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller impersonates the Deploy service account and creates all infrastructure resources in the customer’s Google Cloud project:
  • VPCs, subnets, firewall rules
  • Compute Engine instances, GKE clusters, Cloud Functions
  • Cloud SQL databases, Cloud Storage buckets, Memorystore clusters
  • Cloud Logging log sinks, service accounts, Cloud DNS records
  • Any other Google Cloud resources defined in your origin stack
5. Steady-state observability begins

After deployment, your control plane uses the Steady-state service account to continuously collect observability data (logs, metrics, traces) from the customer's appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.

Service equivalents

When you deploy an origin stack to Google Cloud customer environments, Tensor9 automatically compiles resources from other cloud providers to their Google Cloud equivalents. This allows you to maintain a single origin stack and deploy it across different customer environments.

How service equivalents work

When compiling a deployment stack for Google Cloud:
  1. AWS resources are compiled - AWS resources are converted to their Google Cloud equivalents
  2. Generic resources are adapted - Cloud-agnostic resources (like Kubernetes manifests) are adapted for Google Cloud
  3. Configuration is adjusted - Resource configurations are modified to match Google Cloud conventions and best practices

Common service equivalents

Service Category | AWS                         | Google Cloud Equivalent
Compute          | ECS Fargate                 | Cloud Run
                 | Lambda                      | Cloud Functions
                 | EKS                         | GKE (Kubernetes)
Storage          | S3                          | Cloud Storage
                 | EBS                         | Persistent Disk
Database         | RDS PostgreSQL              | Cloud SQL (PostgreSQL)
                 | RDS Aurora MySQL, RDS MySQL | Cloud SQL (MySQL)
                 | ElastiCache Redis           | Memorystore (Redis)
Networking       | VPC                         | VPC
                 | ALB/NLB/CLB                 | Cloud Load Balancing
                 | NAT Gateway                 | Cloud NAT
                 | Route 53                    | Cloud DNS
Security         | KMS                         | Cloud KMS
                 | IAM Roles                   | IAM Service Accounts
Observability    | CloudWatch Logs             | Cloud Logging
                 | CloudWatch Metrics          | Cloud Monitoring
                 | X-Ray                       | Cloud Trace
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
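Conceptually, the compiler's first pass is a lookup from each origin resource type to its Google Cloud equivalent. The sketch below is illustrative only (the dictionary is a hand-picked subset mirroring the table above, not Tensor9's actual internal mapping), but it shows the shape of the translation:

```python
# Illustrative sketch of the resource-type lookup performed during
# compilation. Real compilation also rewrites each resource's arguments,
# not just its type (see the Lambda example below).
SERVICE_EQUIVALENTS = {
    "aws_lambda_function": "google_cloudfunctions2_function",
    "aws_s3_bucket": "google_storage_bucket",
    "aws_db_instance": "google_sql_database_instance",
    "aws_elasticache_cluster": "google_redis_instance",
    "aws_route53_record": "google_dns_record_set",
}

# Services with no Google Cloud equivalent (see Unsupported AWS services).
UNSUPPORTED = {"aws_instance", "aws_dynamodb_table", "aws_efs_file_system"}

def gcp_equivalent(aws_resource_type: str) -> str:
    """Return the Google Cloud resource type for an AWS resource type."""
    if aws_resource_type in UNSUPPORTED:
        raise ValueError(f"{aws_resource_type} has no Google Cloud equivalent")
    return SERVICE_EQUIVALENTS[aws_resource_type]

print(gcp_equivalent("aws_lambda_function"))
# google_cloudfunctions2_function
```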

Example: Compiling an AWS origin stack

If your origin stack defines a Lambda function:
# Origin stack (AWS)
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  role          = aws_iam_role.api_role.arn

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }
}
Tensor9 compiles it to a Cloud Function for Google Cloud:
# Deployment stack (Google Cloud)
resource "google_cloudfunctions2_function" "api" {
  name     = "myapp-api-${var.instance_id}"
  location = var.region
  project  = var.project_id

  build_config {
    runtime     = "nodejs18"
    entry_point = "handler"
  }

  service_config {
    environment_variables = {
      INSTANCE_ID = var.instance_id
    }

    service_account_email = google_service_account.api.email
  }
}
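The handler and runtime fields illustrate the configuration adjustments made during compilation: Lambda's `module.function` handler becomes a Cloud Functions entry point, and the runtime identifier is renamed. A minimal sketch of that field translation (the function name is illustrative, not part of Tensor9's API):

```python
def translate_lambda_config(handler: str, runtime: str) -> dict:
    """Sketch of how Lambda handler/runtime fields map onto Cloud
    Functions build_config, following the example above."""
    # Lambda handlers are "module.function"; Cloud Functions only needs
    # the exported function name as the entry point.
    entry_point = handler.split(".", 1)[1]
    # Runtime IDs: python3.11 -> python311 (drop the dot);
    # nodejs18.x -> nodejs18 (drop the ".x" suffix).
    if runtime.startswith("python"):
        gcp_runtime = runtime.replace(".", "")
    else:
        gcp_runtime = runtime.removesuffix(".x")
    return {"runtime": gcp_runtime, "entry_point": entry_point}

print(translate_lambda_config("index.handler", "nodejs18.x"))
# {'runtime': 'nodejs18', 'entry_point': 'handler'}
```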

Permissions model

Google Cloud appliances use a four-phase service account permissions model that balances operational capability with customer control.

The four permission phases

Phase        | Service Account       | Purpose                                         | Access Pattern
Install      | [email protected] | Initial setup, major infrastructure changes     | Customer-approved, rare
Steady-state | [email protected] | Continuous observability collection (read-only) | Active by default
Deploy       | [email protected] | Deployments, updates, configuration changes     | Customer-approved, time-bounded
Operate      | [email protected] | Remote operations, troubleshooting, debugging   | Customer-approved, time-bounded

Service account structure

Each service account is created in the customer's Google Cloud project with IAM policies that allow your control plane to impersonate it.

Example: Deploy service account with conditional access
# Deploy service account
resource "google_service_account" "deploy" {
  account_id   = "tensor9-deploy-${var.instance_id}"
  display_name = "Tensor9 Deploy Service Account"
  project      = var.customer_project_id
}

# IAM binding allowing vendor control plane to impersonate
resource "google_service_account_iam_binding" "deploy_impersonation" {
  service_account_id = google_service_account.deploy.name
  role               = "roles/iam.serviceAccountTokenCreator"

  members = [
    "serviceAccount:[email protected]"
  ]

  condition {
    title       = "Deploy access time window"
    description = "Allow impersonation during approved time window"
    expression  = <<-EOT
      request.time >= timestamp("2024-01-01T00:00:00Z") &&
      request.time <= timestamp("2024-12-31T23:59:59Z") &&
      resource.labels.deploy_access == "enabled"
    EOT
  }
}

# Grant Deploy service account permissions in customer project
resource "google_project_iam_member" "deploy_compute" {
  project = var.customer_project_id
  role    = "roles/compute.instanceAdmin.v1"
  member  = "serviceAccount:${google_service_account.deploy.email}"
}

resource "google_project_iam_member" "deploy_storage" {
  project = var.customer_project_id
  role    = "roles/storage.admin"
  member  = "serviceAccount:${google_service_account.deploy.email}"
}
Your control plane can only impersonate the Deploy service account when:
  • The deploy_access label is set to “enabled”
  • The current time is within the allowed window
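The effect of those two conditions can be sketched locally. This is not the CEL expression language or Google's IAM evaluator, just the same logic restated in Python to show when impersonation succeeds (timestamps and label names follow the example above):

```python
from datetime import datetime, timezone

def deploy_access_allowed(now: datetime, labels: dict) -> bool:
    """Mirror the IAM condition above: impersonation is allowed only
    inside the approved time window AND while deploy_access is enabled."""
    window_start = datetime(2024, 1, 1, tzinfo=timezone.utc)
    window_end = datetime(2024, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
    return (window_start <= now <= window_end
            and labels.get("deploy_access") == "enabled")

# Inside the window but label disabled -> denied
print(deploy_access_allowed(datetime(2024, 6, 1, tzinfo=timezone.utc),
                            {"deploy_access": "disabled"}))  # False
# Inside the window and label enabled -> allowed
print(deploy_access_allowed(datetime(2024, 6, 1, tzinfo=timezone.utc),
                            {"deploy_access": "enabled"}))   # True
```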
Customers control when and how long deploy access is granted.

Example: Steady-state service account (read-only observability)
# Steady-state service account
resource "google_service_account" "steadystate" {
  account_id   = "tensor9-steadystate-${var.instance_id}"
  display_name = "Tensor9 Steady-State Service Account"
  project      = var.customer_project_id
}

# Allow vendor control plane to impersonate (no time restriction)
resource "google_service_account_iam_binding" "steadystate_impersonation" {
  service_account_id = google_service_account.steadystate.name
  role               = "roles/iam.serviceAccountTokenCreator"

  members = [
    "serviceAccount:[email protected]"
  ]
}

# Grant read-only permissions scoped to appliance resources
resource "google_project_iam_member" "steadystate_logging_viewer" {
  project = var.customer_project_id
  role    = "roles/logging.viewer"
  member  = "serviceAccount:${google_service_account.steadystate.email}"

  condition {
    title       = "Filter by instance ID"
    description = "Only access resources with matching instance-id label"
    expression  = "resource.labels.instance_id == '${var.instance_id}'"
  }
}

resource "google_project_iam_member" "steadystate_monitoring_viewer" {
  project = var.customer_project_id
  role    = "roles/monitoring.viewer"
  member  = "serviceAccount:${google_service_account.steadystate.email}"

  condition {
    title       = "Filter by instance ID"
    description = "Only access resources with matching instance-id label"
    expression  = "resource.labels.instance_id == '${var.instance_id}'"
  }
}
The Steady-state service account:
  • Can read observability data from resources labeled with the appliance’s instance-id
  • Cannot modify, delete, or terminate any resources
  • Cannot change IAM policies

Deployment workflow with service accounts

1. Customer grants deploy access

Customer approves a deployment by setting the deploy_access label to “enabled” and defining a time window. This can be done manually or through automated approval workflows.
2. You execute deployment locally

You run the deployment locally against the downloaded deployment stack:
cd acme-corp-production
tofu init
tofu apply
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3. Controller impersonates Deploy service account and creates resources

For each resource Terraform attempts to create, the Tensor9 controller inside the appliance impersonates the Deploy service account and creates the resource in the customer's project. All infrastructure changes occur within the customer's project using their Deploy service account permissions.
4. Deploy access expires

After the time window expires, the Deploy service account can no longer be impersonated. Your control plane automatically reverts to using only the Steady-state service account for observability.
See Permissions Model for detailed information on all four phases.

Networking

Google Cloud appliances use an isolated networking architecture with a Tensor9 controller that manages communication with your control plane.

Tensor9 controller VPC

When an appliance is deployed, Tensor9 creates an isolated VPC containing the Tensor9 controller. This VPC is configured with:
  • Cloud NAT: Provides outbound internet connectivity
  • Route to control plane: Establishes a secure channel to your Tensor9 control plane
  • No ingress firewall rules: The controller VPC does not accept inbound connections - all communication is outbound-only
The Tensor9 controller uses this secure channel to:
  • Receive deployments: Deployment stacks are pushed from your control plane to the appliance
  • Configure observability pipeline: Set up log, metric, and trace forwarding to your observability sink
  • Receive operational commands: Execute remote operations initiated from your control plane

Outbound-only security model

The Tensor9 controller in your customer's appliance makes outbound connections only; no ingress ports need to be opened in your customer's network perimeter:
# Example: Controller VPC configuration (managed by Tensor9)
resource "google_compute_network" "tensor9_controller" {
  name                    = "tensor9-controller-${var.instance_id}"
  auto_create_subnetworks = false
  project                 = var.customer_project_id
}

resource "google_compute_subnetwork" "controller" {
  name          = "tensor9-controller-subnet-${var.instance_id}"
  ip_cidr_range = "10.0.0.0/24"
  region        = var.region
  network       = google_compute_network.tensor9_controller.id
  project       = var.customer_project_id
}

# Cloud Router for NAT
resource "google_compute_router" "controller" {
  name    = "tensor9-controller-router-${var.instance_id}"
  region  = var.region
  network = google_compute_network.tensor9_controller.id
  project = var.customer_project_id
}

# Cloud NAT for outbound connectivity
resource "google_compute_router_nat" "controller" {
  name                               = "tensor9-controller-nat-${var.instance_id}"
  router                             = google_compute_router.controller.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
  project                            = var.customer_project_id
}

# Firewall: egress only, no ingress
resource "google_compute_firewall" "controller_egress" {
  name    = "tensor9-controller-egress-${var.instance_id}"
  network = google_compute_network.tensor9_controller.name
  project = var.customer_project_id

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  direction          = "EGRESS"
  destination_ranges = ["0.0.0.0/0"]
}

# No ingress firewall rules - controller never accepts inbound connections
This architecture ensures that the customer’s appliance cannot be compromised via inbound network attacks on the controller.
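The same outbound-only principle governs the controller's behavior at the application level: it originates every connection, polling the control plane for pending work over the secure channel rather than listening on any port. A minimal sketch of that pattern, with a stubbed fetch function standing in for the real outbound HTTPS call (the function names and command shapes here are illustrative, not Tensor9's actual protocol):

```python
from typing import Callable, Optional

def controller_loop(fetch: Callable[[], Optional[dict]], max_polls: int) -> list:
    """Outbound-only command loop: the controller repeatedly makes an
    outbound request for pending work; the control plane never connects
    in. `fetch` stands in for an HTTPS request over the secure channel."""
    handled = []
    for _ in range(max_polls):
        command = fetch()        # outbound request; may return None
        if command is None:
            continue             # nothing pending; poll again
        handled.append(command["type"])  # e.g. a deployment or ops command
    return handled

# Stub control plane with two commands queued.
queue = [{"type": "deploy"}, None, {"type": "configure-observability"}]
print(controller_loop(lambda: queue.pop(0) if queue else None, max_polls=5))
# ['deploy', 'configure-observability']
```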

Application VPC topology

Your application resources run in their own VPC(s), completely separate from the Tensor9 controller VPC. The application VPC topology is defined entirely by your origin stack: whatever VPC resources you define in your origin stack will be deployed into the appliance.

Example: Application VPC with internet-facing load balancer

If your origin stack defines an AWS VPC with public subnets and a load balancer, that topology compiles to Google Cloud VPC resources in the customer's appliance:
# AWS origin stack - Application VPC
resource "aws_vpc" "application" {
  cidr_block           = "10.1.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name          = "myapp-vpc-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Public subnet for load balancer
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.application.id
  cidr_block              = "10.1.0.0/24"
  availability_zone       = data.aws_availability_zones.available.names[0]
  map_public_ip_on_launch = true

  tags = {
    Name          = "myapp-public-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Private subnet for application servers
resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.application.id
  cidr_block        = "10.1.1.0/24"
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Name          = "myapp-private-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# NAT Gateway for private subnet outbound access
resource "aws_eip" "nat" {
  domain = "vpc"

  tags = {
    Name          = "myapp-nat-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

resource "aws_nat_gateway" "application" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id

  tags = {
    Name          = "myapp-nat-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Application Load Balancer
resource "aws_lb" "application" {
  name               = "myapp-lb-${var.instance_id}"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lb.id]
  subnets            = [aws_subnet.public.id]

  tags = {
    "instance-id" = var.instance_id
  }
}
This application VPC topology is deployed alongside the Tensor9 controller VPC, but they remain completely separate. The controller VPC manages the control plane connection, while the application VPC handles your application’s traffic and resources.

Resource naming and labeling

All Google Cloud resources should use the instance_id variable to ensure uniqueness across multiple customer appliances.

Parameterization pattern

variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

# Cloud Storage buckets
resource "google_storage_bucket" "data" {
  name     = "myapp-data-${var.instance_id}"
  location = var.region
  project  = var.project_id
}

# Cloud SQL databases
resource "google_sql_database_instance" "postgres" {
  name             = "myapp-db-${var.instance_id}"
  database_version = "POSTGRES_15"
  region           = var.region
  project          = var.project_id

  settings {
    tier = "db-f1-micro"
  }
}

# Cloud Functions
resource "google_cloudfunctions2_function" "api" {
  name     = "myapp-api-${var.instance_id}"
  location = var.region
  project  = var.project_id

  build_config {
    runtime     = "nodejs20"
    entry_point = "handleRequest"
  }
}

# Service accounts
resource "google_service_account" "app" {
  account_id   = "myapp-${var.instance_id}"
  display_name = "Application Service Account"
  project      = var.project_id
}

Required labels

Label all resources with instance-id to enable permissions scoping and observability:
resource "google_storage_bucket" "data" {
  name     = "myapp-data-${var.instance_id}"
  location = var.region
  project  = var.project_id

  labels = {
    instance-id  = var.instance_id
    application  = "my-app"
    managed-by   = "tensor9"
  }
}

resource "google_compute_instance" "app" {
  name         = "myapp-instance-${var.instance_id}"
  machine_type = "e2-medium"
  zone         = "${var.region}-a"
  project      = var.project_id

  labels = {
    instance-id = var.instance_id
    application = "my-app"
    managed-by  = "tensor9"
  }
}
The instance-id label:
  • Enables IAM condition expressions to scope permissions to specific appliances
  • Allows Cloud Logging filters to isolate telemetry by appliance
  • Helps customers track costs per appliance
  • Facilitates resource discovery by Tensor9 controllers

Observability

Google Cloud appliances provide comprehensive observability through Cloud Logging, Cloud Monitoring, and Cloud Trace.

Cloud Logging

Application and infrastructure logs flow to Cloud Logging:
# Log sink for application logs
resource "google_logging_project_sink" "app_logs" {
  name        = "myapp-logs-${var.instance_id}"
  destination = "storage.googleapis.com/${google_storage_bucket.logs.name}"
  filter      = "labels.instance-id=\"${var.instance_id}\""
  project     = var.project_id

  unique_writer_identity = true
}

# Grant sink service account permissions
resource "google_storage_bucket_iam_member" "logs_writer" {
  bucket = google_storage_bucket.logs.name
  role   = "roles/storage.objectCreator"
  member = google_logging_project_sink.app_logs.writer_identity
}
Your control plane uses the Steady-state service account to continuously fetch logs:
gcloud logging read \
  "labels.instance-id=\"000000007e\"" \
  --limit 100 \
  --format json \
  --project customer-project-id \
  --impersonate-service-account [email protected]
Logs are forwarded to your observability sink for centralized monitoring.

Cloud Monitoring

Infrastructure metrics are automatically collected:
  • Compute Engine: CPU utilization, network I/O, disk I/O
  • Cloud SQL: Database connections, query latency, storage usage
  • Cloud Functions: Invocations, execution time, errors
  • GKE: Node CPU/memory, pod counts, API server metrics
  • Cloud Load Balancing: Request counts, latency, HTTP status codes

Cloud Trace

Enable distributed tracing for Cloud Functions and containerized applications:
resource "google_cloudfunctions2_function" "api" {
  name     = "myapp-api-${var.instance_id}"
  location = var.region
  project  = var.project_id

  build_config {
    runtime     = "nodejs20"
    entry_point = "handleRequest"
  }

  service_config {
    environment_variables = {
      INSTANCE_ID                  = var.instance_id
      GOOGLE_CLOUD_TRACE_ENABLED   = "true"
      GOOGLE_CLOUD_TRACE_NEW_CONTEXT = "true"
    }
  }
}
Cloud Trace data is accessible through the Steady-state service account and forwarded to your observability sink.

Cloud Audit Logs

All API calls within the customer’s Google Cloud project are logged to Cloud Audit Logs, providing a complete audit trail of what your control plane does:
  • Service account impersonations
  • Resource creation, modification, deletion
  • Permission denials
  • Configuration changes
Customers have full visibility into your control plane’s actions through their Cloud Audit Logs.

Artifacts

Google Cloud appliances automatically provision private artifact repositories to store container images and application files deployed by your deployment stacks.

Container images (Artifact Registry)

When you deploy an appliance, Tensor9 automatically provisions a private Artifact Registry repository in the customer's Google Cloud project to store your container images.

Example: Origin stack with container service

Your AWS origin stack references container images from your vendor ECR registry:
# ECS Fargate service in your origin stack
resource "aws_ecs_task_definition" "api" {
  family                   = "myapp-api-${var.instance_id}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name  = "api"
      # Reference to your vendor ECR registry
      image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0"
      portMappings = [
        {
          containerPort = 8080
          protocol      = "tcp"
        }
      ]
      environment = [
        {
          name  = "INSTANCE_ID"
          value = var.instance_id
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
  1. Detects the container image reference in your ECS task definition
  2. Provisions a private Artifact Registry repository in the appliance (e.g., us-docker.pkg.dev/customer-project/myapp-000000007e/api)
  3. Copies the container image from your vendor ECR registry to the appliance’s private Artifact Registry
  4. Rewrites the deployment stack to reference the appliance-local registry
The compiled deployment stack will contain a Cloud Run service with the rewritten image reference:
template {
  containers {
    # Rewritten to reference appliance's private Artifact Registry
    image = "us-docker.pkg.dev/customer-project/myapp-000000007e/api:1.0.0"
    # ... rest of configuration compiled from ECS task definition
  }
}
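The rewrite in step 4 can be sketched as string manipulation: the image path is re-rooted from your ECR registry onto the appliance-local Artifact Registry repository. The repository naming below follows the example path (`us-docker.pkg.dev/<project>/<app>-<instance>/...`) and is illustrative, not a guarantee of Tensor9's naming scheme:

```python
def rewrite_image_ref(ecr_image: str, gcp_project: str,
                      app: str, instance_id: str) -> str:
    """Sketch of the image-reference rewrite: keep the image name and
    tag, replace the registry and repository with the appliance-local
    Artifact Registry path."""
    # e.g. "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0"
    repo_path = ecr_image.split("/", 1)[1]     # "myapp/api:1.0.0"
    name_and_tag = repo_path.split("/")[-1]    # "api:1.0.0"
    return f"us-docker.pkg.dev/{gcp_project}/{app}-{instance_id}/{name_and_tag}"

print(rewrite_image_ref(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0",
    "customer-project", "myapp", "000000007e"))
# us-docker.pkg.dev/customer-project/myapp-000000007e/api:1.0.0
```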
This ensures the container image is stored locally in the customer's project and the application doesn't depend on cross-project access to your vendor registry.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
  • Deploy (tofu apply): Tensor9 copies the container image from your vendor registry to the appliance’s private registry
  • Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private registry
This ensures that artifacts are cleaned up when deployments are removed, preventing orphaned resources.

Function source code

For Lambda functions in your AWS origin stack, Tensor9 automatically handles copying function source code to the customer’s Google Cloud environment:
# Lambda function in your AWS origin stack
resource "aws_lambda_function" "processor" {
  function_name = "myapp-processor-${var.instance_id}"
  handler       = "processor.process_event"
  runtime       = "python3.11"
  role          = aws_iam_role.processor.arn

  # Reference to function code in your vendor S3 bucket
  s3_bucket = "vendor-lambda-sources"
  s3_key    = "processor-v1.0.0.zip"

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }

  tags = {
    "instance-id" = var.instance_id
  }
}
During deployment, Tensor9:
  1. Provisions a private Cloud Storage bucket in the appliance for function sources
  2. Copies the Lambda source archive from your vendor S3 bucket to the appliance’s Cloud Storage bucket
  3. Compiles the Lambda function to a Cloud Function with the appliance-local source reference
Like container images, destroying the deployment stack (tofu destroy) removes the copied function source archives.

See Artifacts for comprehensive documentation on artifact management, including immutability requirements and supported artifact types.

Secrets management

Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store in your AWS origin stack, then pass them to your application as environment variables.

Secret naming and injection

Always use parameterized secret names and inject them as environment variables:
# AWS Secrets Manager secret
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = var.db_password
}

# ECS Fargate task - inject secret as environment variable
resource "aws_ecs_task_definition" "app" {
  family = "myapp-${var.instance_id}"

  container_definitions = jsonencode([
    {
      name  = "app"
      image = "myapp:latest"

      # Inject secret as environment variable
      secrets = [
        {
          name      = "DB_PASSWORD"
          valueFrom = aws_secretsmanager_secret.db_password.arn
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}
Your application reads secrets from environment variables:
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
If your application dynamically fetches secrets using AWS SDK calls (e.g., boto3.client('secretsmanager').get_secret_value()), those calls will NOT be automatically mapped by Tensor9. Always pass secrets as environment variables.
See Secrets for detailed secret management patterns.

Operations

Perform remote operations on Google Cloud appliances using the Operate service account.

kubectl on GKE

Execute kubectl commands against GKE clusters:
tensor9 ops kubectl \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "google_container_cluster.main" \
  -command "kubectl get pods -n my-app-namespace"
Output:
NAME                     READY   STATUS    RESTARTS   AGE
api-7d9f8b5c6d-9k2lm    1/1     Running   0          2h
worker-5c8d7b4f3-8h4km  1/1     Running   0          2h

gcloud CLI operations

Execute gcloud commands:
# List Cloud Storage bucket contents
tensor9 ops gcloud \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "google_storage_bucket.data" \
  -command "gcloud storage ls gs://myapp-data-000000007e/"

# Invoke Cloud Function
tensor9 ops gcloud \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "google_cloudfunctions2_function.api" \
  -command "gcloud functions call myapp-api-000000007e --data '{\"test\":true}'"

# View Cloud SQL status
tensor9 ops gcloud \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "google_sql_database_instance.postgres" \
  -command "gcloud sql instances describe myapp-db-000000007e"

Database queries

Execute SQL queries against Cloud SQL databases:
tensor9 ops db \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "google_sql_database_instance.postgres" \
  -command "SELECT count(*) FROM users WHERE created_at > NOW() - INTERVAL '24 hours'"

Operations endpoints

Create temporary operations endpoints for interactive access:
# Create kubectl endpoint
tensor9 ops endpoint create \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "google_container_cluster.main" \
  -endpointType kubectl \
  -ttl 3600

# Output:
# Endpoint created: https://ops.tensor9.io/kubectl/abc123
# Expires in: 1 hour
# Use: kubectl --server=https://ops.tensor9.io/kubectl/abc123 get pods
See Operations for comprehensive operations documentation.

Example: Complete Google Cloud appliance

Here’s a complete example of a deployment stack for a Google Cloud appliance, compiled from an AWS origin stack:

main.tf

# VPC
resource "google_compute_network" "main" {
  name                    = "myapp-vpc-${var.instance_id}"
  auto_create_subnetworks = false
  project                 = var.project_id
}

# Subnets
resource "google_compute_subnetwork" "private" {
  name          = "myapp-private-${var.instance_id}"
  ip_cidr_range = "10.0.1.0/24"
  region        = var.region
  network       = google_compute_network.main.id
  project       = var.project_id

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.1.0.0/16"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.2.0.0/16"
  }
}

# GKE Cluster
resource "google_container_cluster" "main" {
  name     = "myapp-cluster-${var.instance_id}"
  location = var.region
  project  = var.project_id

  network    = google_compute_network.main.name
  subnetwork = google_compute_subnetwork.private.name

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  logging_config {
    enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
  }

  monitoring_config {
    enable_components = ["SYSTEM_COMPONENTS"]
    managed_prometheus {
      enabled = true
    }
  }

  resource_labels = {
    instance-id = var.instance_id
  }
}

# Cloud SQL PostgreSQL
resource "google_sql_database_instance" "postgres" {
  name             = "myapp-db-${var.instance_id}"
  database_version = "POSTGRES_15"
  region           = var.region
  project          = var.project_id

  settings {
    tier              = "db-f1-micro"
    availability_type = "REGIONAL"
    disk_size         = 20

    backup_configuration {
      enabled    = true
      start_time = "03:00"
    }

    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.main.id
    }

    user_labels = {
      instance-id = var.instance_id
    }
  }

  deletion_protection = false
}

resource "google_sql_database" "main" {
  name     = "myapp"
  instance = google_sql_database_instance.postgres.name
  project  = var.project_id
}

resource "google_sql_user" "main" {
  name     = "admin"
  instance = google_sql_database_instance.postgres.name
  password = var.db_password
  project  = var.project_id
}

# Cloud Storage bucket
resource "google_storage_bucket" "data" {
  name     = "myapp-data-${var.instance_id}"
  location = var.region
  project  = var.project_id

  versioning {
    enabled = true
  }

  labels = {
    instance-id = var.instance_id
  }
}

# Memorystore Redis
resource "google_redis_instance" "cache" {
  name           = "myapp-redis-${var.instance_id}"
  tier           = "BASIC"
  memory_size_gb = 1
  region         = var.region
  project        = var.project_id

  redis_version      = "REDIS_7_0"
  authorized_network = google_compute_network.main.id

  labels = {
    instance-id = var.instance_id
  }
}

# Secrets (AWS Secrets Manager - proxied by Tensor9)
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = var.db_password
}

variables.tf

variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

variable "project_id" {
  type        = string
  description = "Google Cloud project ID"
}

variable "db_password" {
  type        = string
  description = "Database master password"
  sensitive   = true
}

variable "region" {
  type        = string
  description = "Google Cloud region"
  default     = "us-central1"
}

outputs.tf

output "gke_cluster_endpoint" {
  description = "GKE cluster endpoint"
  value       = google_container_cluster.main.endpoint
  sensitive   = true
}

output "database_connection" {
  description = "Cloud SQL connection name"
  value       = google_sql_database_instance.postgres.connection_name
  sensitive   = true
}

output "redis_host" {
  description = "Redis instance host"
  value       = google_redis_instance.cache.host
}

output "data_bucket" {
  description = "Cloud Storage bucket name"
  value       = google_storage_bucket.data.name
}

Best practices

Every Google Cloud resource with a name or identifier should include ${var.instance_id} to prevent conflicts across customer appliances:
# ✓ CORRECT
resource "google_storage_bucket" "data" {
  name = "myapp-data-${var.instance_id}"
}

resource "google_service_account" "app" {
  account_id = "myapp-${var.instance_id}"
}

# ✗ INCORRECT - Will cause collisions
resource "google_storage_bucket" "data" {
  name = "myapp-data"
}
Apply the instance-id label to every resource:
labels = {
  instance-id = var.instance_id
}
This enables:
  • IAM condition expressions for permission scoping
  • Cloud Logging and Monitoring filtering
  • Cost tracking
  • Resource discovery
Configure logging for Cloud Functions, GKE, Cloud SQL, and other services:
# GKE logging
logging_config {
  enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
}

# Cloud Function logging (enabled by default)
# Cloud SQL logging
settings {
  database_flags {
    name  = "log_statement"
    value = "all"
  }
}
This ensures observability data flows to your control plane.
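As a sketch, a per-instance log sink filtered on the instance-id label could route appliance logs to a destination your control plane reads; the Pub/Sub destination and the exact label filter syntax are assumptions to adapt to your setup:

```hcl
# Hypothetical per-appliance log sink, filtered by the instance-id label
resource "google_logging_project_sink" "appliance" {
  name        = "myapp-logs-${var.instance_id}"
  project     = var.project_id
  destination = "pubsub.googleapis.com/projects/${var.project_id}/topics/myapp-logs-${var.instance_id}"
  filter      = "labels.\"instance-id\"=\"${var.instance_id}\""

  # Create a dedicated service account identity for the sink writer
  unique_writer_identity = true
}
```

The sink's writer identity must be granted publish permission on the destination topic.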
Never hardcode secrets. Use AWS Secrets Manager or SSM Parameter Store with parameterized names in your AWS origin stack:
# AWS Secrets Manager (recommended)
resource "aws_secretsmanager_secret" "api_key" {
  name = "${var.instance_id}/prod/api/key"

  tags = {
    "instance-id" = var.instance_id
  }
}

# Or AWS Systems Manager Parameter Store
resource "aws_ssm_parameter" "db_password" {
  name  = "/${var.instance_id}/prod/db/password"
  type  = "SecureString"
  value = var.db_password

  tags = {
    "instance-id" = var.instance_id
  }
}
Pass secrets to your application as environment variables. Runtime SDK calls to fetch secrets are not automatically mapped by Tensor9.
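For example, in an AWS origin stack that runs containers on ECS, the secret can be injected as an environment variable through the task definition's `secrets` field rather than fetched at runtime. The task family, container name, and image below are hypothetical:

```hcl
# Sketch: inject the Secrets Manager value as the DB_PASSWORD env var
resource "aws_ecs_task_definition" "app" {
  family                = "myapp-${var.instance_id}"
  container_definitions = jsonencode([{
    name   = "app"
    image  = "myapp:latest"
    memory = 512
    secrets = [{
      name      = "DB_PASSWORD"
      valueFrom = aws_secretsmanager_secret.db_password.arn
    }]
  }])
}
```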

Troubleshooting

Symptom: Terraform apply fails with “Permission denied” or “403 Forbidden” errors.
Solutions:
  • Verify your control plane has successfully impersonated the Deploy service account
  • Check the Deploy service account’s IAM roles include necessary permissions for the resources being created
  • Ensure the impersonation policy allows your control plane’s service account
  • Verify the deploy_access label is set and the time window hasn’t expired
  • Review Cloud Audit Logs in the customer project to see which specific API call was denied
Symptom: “Resource already exists” or “Name is already in use” errors during deployment.
Solutions:
  • Ensure all resource names include ${var.instance_id}
  • Verify the instance_id variable is being passed correctly
  • Check that no hardcoded resource names exist in your origin stack
  • For Cloud Storage buckets, remember they must be globally unique - include both app name and instance_id
Symptom: Cloud Logging and Monitoring data aren’t appearing in your observability sink.
Solutions:
  • Verify the Steady-state service account has permissions to read logs and metrics
  • Check that all resources are labeled with instance-id
  • Ensure log sinks are configured correctly
  • Verify resource names are parameterized and follow the expected pattern
  • Check that the control plane is successfully impersonating the Steady-state service account
Symptom: “Quota exceeded” errors when creating resources.
Solutions:
  • Ask the customer to request quota increases from Google Cloud Console
  • Consider deploying appliances in separate Google Cloud regions
  • Ask the customer to review their current quota usage
  • Ask the customer to clean up unused resources in their project
Symptom: Cloud SQL instance creation fails with VPC peering errors.
Solutions:
  • Ensure VPC has a private service connection allocated
  • Verify the IP address range doesn’t conflict with existing ranges
  • Check that servicenetworking.googleapis.com API is enabled
  • Ensure the Deploy service account has compute.networks.updatePolicy permission
If you’re experiencing issues not covered here or need additional assistance with Google Cloud deployments, we’re here to help:
  • Slack: Join our community Slack workspace for real-time support
  • Email: Contact us at [email protected]
Our team can help with deployment troubleshooting, service account configuration, service equivalents, and best practices for Google Cloud environments.

Next steps

Now that you understand deploying to Google Cloud customer environments, explore these related topics: