DigitalOcean is a fully supported deployment platform for Tensor9 appliances. Deploying to DigitalOcean customer environments provides access to managed Kubernetes (DOKS), managed databases, and object storage with a simplified infrastructure model ideal for mid-market customers.

Overview

When you deploy an application to DigitalOcean customer environments using Tensor9:
  • Customer appliances run entirely within the customer’s DigitalOcean account
  • Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
  • API tokens enable your control plane to manage customer appliances with customer-approved permissions
  • Service equivalents compile your origin stack into DigitalOcean-native resources
DigitalOcean appliances leverage DigitalOcean Kubernetes (DOKS) as the primary compute platform, with managed databases, Spaces object storage, and load balancers providing enterprise-grade infrastructure with simplified configuration.

Prerequisites

Before deploying appliances to DigitalOcean customer environments, ensure:

Your control plane

  • Dedicated AWS account for your Tensor9 control plane
  • Control plane installed - See Installing Tensor9
  • Origin stack published - Your application infrastructure defined and uploaded

Customer DigitalOcean account

Your customers must provide:
  • DigitalOcean account where the appliance will be deployed
  • API tokens configured for the four-phase permissions model (Install, Steady-state, Deploy, Operate)
  • Sufficient resource quotas for your application’s needs (Droplets, volumes, load balancers)
  • DigitalOcean region where they want the appliance deployed

Your development environment

  • doctl CLI installed and configured
  • kubectl for Kubernetes operations
  • Terraform or OpenTofu (if using Terraform origin stacks)
  • Docker (if deploying container-based applications)

How DigitalOcean appliances work

DigitalOcean appliances are deployed on DigitalOcean Kubernetes (DOKS) with managed services orchestrated by your Tensor9 control plane.
1. Customer provisions API tokens

Your customer creates four API tokens in their DigitalOcean account, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These tokens define what the Tensor9 controller in the appliance can do within their environment. The customer configures token scopes and expiration times to control when and how long each permission phase is active.
2. You create a release for the customer appliance

You create a release targeting the customer’s appliance:
tensor9 stack release create \
  -appName my-app \
  -customerName acme-corp \
  -vendorVersion "1.0.0" \
  -description "Initial production deployment"
Your control plane compiles your origin stack into a deployment stack tailored for DigitalOcean, converting any non-DigitalOcean resources to their DigitalOcean service equivalents. The deployment stack downloads to your local environment.
3. Customer grants deploy access

The customer approves the deployment by providing or activating the Deploy API token. This can be manual (sharing the token) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can use the Deploy token to create resources in the customer’s account.
4. You deploy the release

You run the deployment locally against the downloaded deployment stack:
cd acme-corp-production
tofu init
tofu apply
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller uses the Deploy API token and creates all infrastructure resources in the customer’s DigitalOcean account:
  • DOKS cluster and node pools
  • Managed databases (PostgreSQL, MySQL, MongoDB, Redis)
  • Spaces buckets for object storage
  • Load balancers
  • DNS records
  • Any other DigitalOcean resources defined in your origin stack
5. Steady-state observability begins

After deployment, your control plane uses the Steady-state token to continuously collect observability data (logs, metrics) from the customer’s appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.

Service equivalents

When you deploy an origin stack to DigitalOcean customer environments, Tensor9 automatically compiles resources from other cloud providers to their DigitalOcean equivalents.

How service equivalents work

When compiling a deployment stack for DigitalOcean:
  1. DigitalOcean-native resources are preserved - If your origin stack already uses DigitalOcean resources (DOKS, Managed PostgreSQL, Spaces), they remain unchanged
  2. AWS resources are compiled - AWS resources are converted to their DigitalOcean equivalents
  3. Kubernetes resources are deployed - Most compute workloads run on DOKS (DigitalOcean Kubernetes)
  4. Configuration is adjusted - Resource configurations are modified to match DigitalOcean conventions

Common service equivalents

| Service Category | AWS | DigitalOcean Equivalent |
| --- | --- | --- |
| Containers | EKS | DOKS (DigitalOcean Kubernetes) |
| Containers | ECS Fargate | DOKS with containerized workloads |
| Functions | Lambda | DigitalOcean Functions, Knative on DOKS |
| Storage | S3 | Spaces (S3-compatible) |
| Storage | EBS | Block Storage (volumes) |
| Database | RDS PostgreSQL | Managed PostgreSQL |
| Database | RDS Aurora MySQL, RDS MySQL | Managed MySQL |
| Database | DocumentDB | Managed MongoDB |
| Database | ElastiCache Redis | Managed Redis |
| Networking | VPC | VPC (DigitalOcean VPC) |
| Networking | ALB/NLB | Load Balancer |
| Networking | Route 53 | DigitalOcean DNS |
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
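The equivalents table can be read as a simple lookup. The sketch below is illustrative only (not Tensor9's actual compiler tables): it maps common AWS services to their DigitalOcean counterparts and fails loudly for the unsupported services listed above.

```python
# Illustrative mapping of common service equivalents; a sketch for
# reasoning about compilation, not Tensor9's real implementation.
SERVICE_EQUIVALENTS = {
    "EKS": "DOKS (DigitalOcean Kubernetes)",
    "ECS Fargate": "DOKS with containerized workloads",
    "Lambda": "DigitalOcean Functions",
    "S3": "Spaces",
    "EBS": "Block Storage",
    "RDS PostgreSQL": "Managed PostgreSQL",
    "RDS MySQL": "Managed MySQL",
    "DocumentDB": "Managed MongoDB",
    "ElastiCache Redis": "Managed Redis",
    "VPC": "VPC",
    "ALB/NLB": "Load Balancer",
    "Route 53": "DigitalOcean DNS",
}

# Services the documentation lists as unsupported on DigitalOcean.
UNSUPPORTED = {"EC2", "DynamoDB", "EFS"}

def equivalent(aws_service):
    """Return the DigitalOcean equivalent, or raise for unsupported services."""
    if aws_service in UNSUPPORTED:
        raise ValueError(f"{aws_service} is not supported on DigitalOcean appliances")
    return SERVICE_EQUIVALENTS[aws_service]
```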

Example: Compiling an AWS origin stack

If your origin stack defines a Lambda function:
# Origin stack (AWS)
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  role          = aws_iam_role.api_role.arn

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }
}
Tensor9 compiles it to a DigitalOcean Function:
# Deployment stack (DigitalOcean)
resource "digitalocean_function" "api" {
  name      = "myapp-api-${var.instance_id}"
  namespace = var.namespace

  limits {
    timeout = 30
    memory  = 256
  }

  env = {
    INSTANCE_ID = var.instance_id
  }
}

Permissions model

DigitalOcean appliances use a four-phase API token permissions model that balances operational capability with customer control.

The four permission phases

| Phase | API Token Scope | Purpose | Access Pattern |
| --- | --- | --- | --- |
| Install | Read/Write (all resources) | Initial setup, major infrastructure changes | Customer-approved, rare |
| Steady-state | Read-only (observability) | Continuous observability collection | Active by default |
| Deploy | Read/Write (scoped to appliance) | Deployments, updates, configuration changes | Customer-approved, time-bounded |
| Operate | Read/Write (scoped operations) | Remote operations, troubleshooting, debugging | Customer-approved, time-bounded |

API token structure

Each token is created in the customer’s DigitalOcean account with specific scopes and expiration times.

Example: Deploy token configuration
# Customer creates a Deploy token with scoped permissions
doctl auth init --context tensor9-deploy

# Token scopes:
# - kubernetes:read
# - kubernetes:write
# - database:read
# - database:write
# - spaces:read
# - spaces:write
# - load_balancer:read
# - load_balancer:write

# Token expiration: 30 days
The Tensor9 controller can only use the Deploy token when:
  • The token is provided to the controller
  • The token has not expired
  • The customer has not revoked it
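The three conditions above can be sketched as a small validity check. The names here are hypothetical, not the Tensor9 API:

```python
# Hypothetical sketch of the Deploy-token gate: the controller may act
# only while the token is provided, unexpired, and not revoked.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class DeployToken:
    value: Optional[str]   # None until the customer provides it
    expires_at: datetime
    revoked: bool = False

def can_deploy(token, now=None):
    """All three conditions must hold for the controller to act."""
    now = now or datetime.now(timezone.utc)
    return token.value is not None and not token.revoked and now < token.expires_at
```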
Example: Steady-state token (read-only observability)
# Customer creates a Steady-state token with read-only permissions
doctl auth init --context tensor9-steadystate

# Token scopes:
# - kubernetes:read
# - database:read
# - monitoring:read

# Token expiration: Never (or long-lived)
The Steady-state token:
  • Can read observability data from resources tagged with the appliance’s instance-id
  • Cannot modify, delete, or create any resources
  • Allows continuous monitoring without customer intervention
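The read-only guarantee boils down to a scope check: a Steady-state token carries only `*:read` scopes, so any write action fails. This checker is an illustrative sketch using the `service:access` scope convention from the examples above, not a DigitalOcean API:

```python
# Hypothetical scope check mirroring the read-only guarantee of the
# Steady-state token. Scope strings follow the "<service>:<access>"
# convention shown in the token examples.
STEADY_STATE_SCOPES = frozenset({"kubernetes:read", "database:read", "monitoring:read"})

def is_permitted(scopes, service, access):
    """True only if the exact service:access scope was granted."""
    return f"{service}:{access}" in scopes
```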

Deployment workflow with API tokens

1. Customer grants deploy access

Customer approves a deployment by providing the Deploy API token to the Tensor9 controller. This can be done via the Tensor9 UI, CLI, or automated workflows.
2. You execute deployment locally

You run the deployment locally against the downloaded deployment stack:
cd acme-corp-production
tofu init
tofu apply
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3. Controller uses Deploy token and creates resources

For each resource Terraform attempts to create, the Tensor9 controller inside the appliance uses the Deploy API token and creates the resource in the customer’s account. All infrastructure changes occur within the customer’s account using their Deploy token permissions.
4. Deploy access expires

After the token expires or is revoked, the Deploy token can no longer be used. Your control plane automatically reverts to using only the Steady-state token for observability.
See Permissions Model for detailed information on all four phases.

Networking

DigitalOcean appliances use DigitalOcean VPC for network isolation and DigitalOcean Kubernetes (DOKS) for compute.

Tensor9 controller on DOKS

When an appliance is deployed, Tensor9 creates a dedicated DOKS cluster containing the Tensor9 controller. The controller:
  • Communicates outbound to your Tensor9 control plane over HTTPS
  • Manages appliance resources using the customer’s API tokens
  • Forwards observability data to your observability sink
  • Does not accept inbound connections - all communication is outbound-only
The Tensor9 controller in your customer’s appliance makes only outbound connections, so no ingress ports need to be opened in the customer’s network perimeter:
# Example: Controller DOKS cluster (managed by Tensor9)
resource "digitalocean_kubernetes_cluster" "tensor9_controller" {
  name    = "tensor9-controller-${var.instance_id}"
  region  = var.region
  version = "1.28.2-do.0"

  node_pool {
    name       = "controller-pool"
    size       = "s-1vcpu-2gb"
    node_count = 2
  }

  vpc_uuid = digitalocean_vpc.tensor9_controller.id

  tags = ["tensor9", "controller", "instance-id:${var.instance_id}"]
}

# VPC for controller isolation
resource "digitalocean_vpc" "tensor9_controller" {
  name     = "tensor9-controller-${var.instance_id}"
  region   = var.region
  ip_range = "10.0.0.0/24"
}

# No inbound firewall rules required - controller only connects outbound
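The outbound-only pattern means the controller initiates every exchange. A minimal sketch, assuming a hypothetical control-plane endpoint (the URL and payload shape are made up for illustration):

```python
# Sketch of the outbound-only pattern: the controller builds and sends
# HTTPS requests to the control plane; nothing in the appliance listens
# for inbound traffic. The endpoint URL is a placeholder.
import json
import urllib.request

CONTROL_PLANE_URL = "https://control-plane.example.com/v1/heartbeat"

def build_heartbeat(instance_id):
    """Build (but do not send) an outbound heartbeat request."""
    payload = json.dumps({"instance_id": instance_id}).encode("utf-8")
    return urllib.request.Request(
        CONTROL_PLANE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```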

Application infrastructure

Your application resources run on their own DOKS cluster or use managed services, completely separate from the Tensor9 controller infrastructure. The application infrastructure is defined entirely by your origin stack.

Example: Application DOKS cluster with load balancer
# Application DOKS cluster (compiled from your AWS origin stack)
resource "digitalocean_kubernetes_cluster" "application" {
  name    = "myapp-cluster-${var.instance_id}"
  region  = var.region
  version = "1.28.2-do.0"

  node_pool {
    name       = "app-pool"
    size       = "s-2vcpu-4gb"
    auto_scale = true
    min_nodes  = 2
    max_nodes  = 10
  }

  vpc_uuid = digitalocean_vpc.application.id

  tags = ["myapp", "instance-id:${var.instance_id}"]
}

# VPC for application
resource "digitalocean_vpc" "application" {
  name     = "myapp-vpc-${var.instance_id}"
  region   = var.region
  ip_range = "10.1.0.0/16"
}

# Load Balancer
resource "digitalocean_loadbalancer" "application" {
  name   = "myapp-lb-${var.instance_id}"
  region = var.region

  forwarding_rule {
    entry_port     = 443
    entry_protocol = "https"

    target_port     = 8080
    target_protocol = "http"

    certificate_id = digitalocean_certificate.app.id
  }

  healthcheck {
    port     = 8080
    protocol = "http"
    path     = "/health"
  }

  droplet_tag = "myapp-backend-${var.instance_id}"
}

Resource naming and tagging

All DigitalOcean resources should use the instance_id variable to ensure uniqueness across multiple customer appliances.

Parameterization pattern

variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

# Spaces buckets
resource "digitalocean_spaces_bucket" "data" {
  name   = "myapp-data-${var.instance_id}"
  region = var.region
}

# Managed databases
resource "digitalocean_database_cluster" "postgres" {
  name   = "myapp-db-${var.instance_id}"
  engine = "pg"
  size   = "db-s-1vcpu-1gb"
  region = var.region
}

# DOKS clusters
resource "digitalocean_kubernetes_cluster" "app" {
  name   = "myapp-cluster-${var.instance_id}"
  region = var.region
}

# Load balancers
resource "digitalocean_loadbalancer" "app" {
  name   = "myapp-lb-${var.instance_id}"
  region = var.region
}
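A pre-release lint can catch names that skip this pattern. The check below is an illustrative sketch (not a Tensor9 feature) that flags Terraform name expressions missing the interpolation:

```python
# Illustrative lint: flag Terraform name expressions that do not
# interpolate var.instance_id, so hardcoded names are caught before
# they collide across customer appliances.
import re

def is_parameterized(name_expr):
    """True if a Terraform name expression interpolates var.instance_id."""
    return re.search(r"\$\{var\.instance_id\}", name_expr) is not None
```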

Required tags

DigitalOcean uses string-based tags (not key-value pairs like AWS/GCP) for most resources. Tag all resources with instance-id to enable observability and resource discovery.

For most resources (Droplets, DOKS, Databases, Load Balancers, Volumes):
resource "digitalocean_kubernetes_cluster" "app" {
  name   = "myapp-cluster-${var.instance_id}"
  region = var.region

  tags = [
    "instance-id:${var.instance_id}",
    "application:my-app",
    "managed-by:tensor9"
  ]
}

resource "digitalocean_database_cluster" "postgres" {
  name   = "myapp-db-${var.instance_id}"
  engine = "pg"
  region = var.region

  tags = [
    "instance-id:${var.instance_id}",
    "myapp"
  ]
}
For Spaces buckets (uses key-value tags):
resource "digitalocean_spaces_bucket" "data" {
  name   = "myapp-data-${var.instance_id}"
  region = var.region

  tags = {
    instance-id = var.instance_id
    application = "my-app"
    managed-by  = "tensor9"
  }
}
DigitalOcean tags are simple strings (e.g., "instance-id:000000007e") for most resources, unlike AWS/Google Cloud which use key-value pairs. Spaces buckets are an exception and support key-value tags. When filtering or querying resources, use the full string tag format.
The instance-id tag:
  • Allows filtering of observability data by appliance
  • Helps customers track costs per appliance
  • Facilitates resource discovery by Tensor9 controllers
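Because most DigitalOcean tags are flat strings like "instance-id:000000007e", filtering by appliance means splitting each tag on its first colon. A stdlib-only sketch:

```python
# Helper for DigitalOcean's flat string tags: split "key:value" tags
# back into pairs so resources can be filtered by instance-id.

def parse_tags(tags):
    """Split "key:value" string tags; bare tags map to an empty value."""
    parsed = {}
    for tag in tags:
        key, _, value = tag.partition(":")
        parsed[key] = value
    return parsed

def matches_instance(tags, instance_id):
    """True if the resource's tags identify the given appliance."""
    return parse_tags(tags).get("instance-id") == instance_id
```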

Observability

DigitalOcean appliances provide observability through DigitalOcean Monitoring, DOKS logging, and integration with your observability sink.

DigitalOcean Monitoring

Infrastructure metrics are automatically collected for:
  • Droplets: CPU utilization, memory usage, disk I/O, network traffic
  • DOKS: Node CPU/memory, pod counts, cluster health
  • Managed Databases: Connections, queries per second, replication lag
  • Load Balancers: Request counts, response times, connection counts
  • Spaces: Storage used, request counts
Your control plane uses the Steady-state token to fetch metrics:
doctl monitoring metrics droplet \
  --tag-name instance-id:000000007e \
  --start $(date -u -d '1 hour ago' +%s) \
  --end $(date -u +%s)
Metrics are forwarded to your observability sink for centralized monitoring.
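Note that the `date -u -d '1 hour ago'` in the doctl invocation above is GNU-specific (BSD/macOS date uses different flags). The same one-hour epoch window can be computed portably:

```python
# Portable computation of the (start, end) Unix-timestamp window used
# by the doctl monitoring query above.
from datetime import datetime, timedelta, timezone

def last_hour_window():
    """Return (start, end) Unix timestamps covering the past hour."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)
    return int(start.timestamp()), int(end.timestamp())
```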

DOKS logging

Application logs from containers running on DOKS are collected and forwarded:
# Example: Application deployment with logging
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp
    instance-id: "000000007e"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        instance-id: "000000007e"
    spec:
      containers:
      - name: api
        image: myapp/api:1.0.0
        env:
        - name: INSTANCE_ID
          value: "000000007e"
Logs are accessible through kubectl and forwarded to your observability sink:
# View logs via kubectl
kubectl logs -l app=myapp,instance-id=000000007e --tail=100

Database query logs

Managed database query logs can be enabled and forwarded:
resource "digitalocean_database_cluster" "postgres" {
  name   = "myapp-db-${var.instance_id}"
  engine = "pg"
  size   = "db-s-1vcpu-1gb"
  region = var.region

  # Enable connection pooling for better observability
  connection_pool {
    name    = "myapp-pool"
    mode    = "transaction"
    size    = 25
    db_name = "myapp"
    user    = "myapp"
  }
}

Artifacts

DigitalOcean appliances automatically provision private container registries to store container images deployed by your deployment stacks.

Container images (DigitalOcean Container Registry)

When you deploy an appliance, Tensor9 automatically provisions a private container registry in the customer’s DigitalOcean account.

Example: Origin stack with DOKS deployment

Your origin stack references container images from your vendor registry:
# Kubernetes deployment in your origin stack
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-api-${var.instance_id}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-api
  template:
    metadata:
      labels:
        app: myapp-api
        instance-id: ${var.instance_id}
    spec:
      containers:
      - name: api
        # Reference to your vendor registry
        image: registry.digitalocean.com/vendor-registry/myapp-api:1.0.0
        ports:
        - containerPort: 8080
        env:
        - name: INSTANCE_ID
          value: ${var.instance_id}
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
  1. Detects the container image reference in your Kubernetes manifests
  2. Provisions a private container registry in the appliance
  3. Copies the container image from your vendor registry to the appliance’s private registry
  4. Rewrites the deployment stack to reference the appliance-local registry
The compiled deployment stack will contain:
spec:
  containers:
  - name: api
    # Rewritten to reference appliance's private registry
    image: registry.digitalocean.com/customer-registry-000000007e/myapp-api:1.0.0
This ensures the container image is stored locally in the customer’s account and the application doesn’t depend on cross-account access to your vendor registry.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
  • Deploy (tofu apply): Tensor9 copies the container image from your vendor registry to the appliance’s private registry
  • Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private registry
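The registry rewrite in step 4 amounts to swapping the namespace segment of a `host/namespace/name:tag` reference. A minimal sketch (registry names are illustrative):

```python
# Sketch of the image-reference rewrite: point a host/namespace/name:tag
# reference at the appliance-local registry namespace instead of the
# vendor's. Not Tensor9's actual implementation.

def rewrite_image(image, appliance_namespace):
    """Swap the registry namespace in host/namespace/name:tag."""
    host, _vendor_namespace, name_and_tag = image.split("/", 2)
    return f"{host}/{appliance_namespace}/{name_and_tag}"
```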
See Artifacts for comprehensive documentation on artifact management.

Secrets management

Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store in your AWS origin stack, then pass them to your application as environment variables.

Secret naming and injection

Always use parameterized secret names and inject them as environment variables:
# AWS Secrets Manager secret
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = var.db_password
}

# ECS Fargate task - inject secret as environment variable
resource "aws_ecs_task_definition" "app" {
  family = "myapp-${var.instance_id}"

  container_definitions = jsonencode([
    {
      name  = "app"
      image = "myapp:latest"

      # Inject secret as environment variable
      secrets = [
        {
          name      = "DB_PASSWORD"
          valueFrom = aws_secretsmanager_secret.db_password.arn
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}
Your application reads secrets from environment variables:
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
If your application dynamically fetches secrets using AWS SDK calls (e.g., boto3.client('secretsmanager').get_secret_value()), those calls will NOT be automatically mapped by Tensor9. Always pass secrets as environment variables.
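Expanding the pattern above: resolve required secrets from environment variables once at startup and fail fast when one is missing, rather than fetching them with SDK calls at runtime. The variable names here are illustrative:

```python
# Read required secrets from the environment at startup and fail fast
# with a clear error if one is missing, instead of making runtime cloud
# SDK calls (which Tensor9 does not remap).
import os

def require_env(name):
    """Return the named environment variable or raise a clear error."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"required secret {name} is not set in the environment")
    return value
```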
See Secrets for detailed secret management patterns.

Operations

Perform remote operations on DigitalOcean appliances using the Operate token.

kubectl on DOKS

Execute kubectl commands against DOKS clusters:
tensor9 ops kubectl \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "digitalocean_kubernetes_cluster.app" \
  -command "kubectl get pods -n myapp"
Output:
NAME                     READY   STATUS    RESTARTS   AGE
api-7d9f8b5c6d-9k2lm    1/1     Running   0          2h
worker-5c8d7b4f3-8h4km  1/1     Running   0          2h

doctl CLI operations

Execute doctl commands:
# List Spaces buckets
tensor9 ops doctl \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "digitalocean_spaces_bucket.data" \
  -command "doctl compute space list"

# View database status
tensor9 ops doctl \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "digitalocean_database_cluster.postgres" \
  -command "doctl databases get myapp-db-000000007e"

Database queries

Execute SQL queries against managed databases:
tensor9 ops db \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "digitalocean_database_cluster.postgres" \
  -command "SELECT count(*) FROM users WHERE created_at > NOW() - INTERVAL '24 hours'"
See Operations for comprehensive operations documentation.

Example: Complete DigitalOcean appliance

Here’s a complete example of a deployment stack for a DigitalOcean appliance, compiled from an AWS origin stack:

main.tf

# DOKS Cluster
resource "digitalocean_kubernetes_cluster" "main" {
  name    = "myapp-cluster-${var.instance_id}"
  region  = var.region
  version = "1.28.2-do.0"

  node_pool {
    name       = "app-pool"
    size       = "s-2vcpu-4gb"
    auto_scale = true
    min_nodes  = 2
    max_nodes  = 10

    tags = ["myapp", "instance-id:${var.instance_id}"]
  }

  vpc_uuid = digitalocean_vpc.main.id

  tags = ["myapp", "instance-id:${var.instance_id}"]
}

# VPC
resource "digitalocean_vpc" "main" {
  name     = "myapp-vpc-${var.instance_id}"
  region   = var.region
  ip_range = "10.0.0.0/16"
}

# Managed PostgreSQL
resource "digitalocean_database_cluster" "postgres" {
  name       = "myapp-db-${var.instance_id}"
  engine     = "pg"
  version    = "15"
  size       = "db-s-2vcpu-4gb"
  region     = var.region
  node_count = 2

  tags = ["myapp", "instance-id:${var.instance_id}"]
}

resource "digitalocean_database_db" "main" {
  cluster_id = digitalocean_database_cluster.postgres.id
  name       = "myapp"
}

resource "digitalocean_database_user" "main" {
  cluster_id = digitalocean_database_cluster.postgres.id
  name       = "myapp"
}

# Spaces bucket
resource "digitalocean_spaces_bucket" "data" {
  name   = "myapp-data-${var.instance_id}"
  region = var.region

  tags = {
    instance-id = var.instance_id
    application = "my-app"
    managed-by  = "tensor9"
  }
}

# Managed Redis
resource "digitalocean_database_cluster" "redis" {
  name       = "myapp-redis-${var.instance_id}"
  engine     = "redis"
  version    = "7"
  size       = "db-s-1vcpu-1gb"
  region     = var.region
  node_count = 1

  tags = ["myapp", "instance-id:${var.instance_id}"]
}

# Load Balancer
resource "digitalocean_loadbalancer" "app" {
  name   = "myapp-lb-${var.instance_id}"
  region = var.region

  forwarding_rule {
    entry_port      = 443
    entry_protocol  = "https"
    target_port     = 8080
    target_protocol = "http"
    certificate_id  = digitalocean_certificate.app.id
  }

  healthcheck {
    port     = 8080
    protocol = "http"
    path     = "/health"
  }

  droplet_tag = "myapp-backend-${var.instance_id}"
}

# DNS
resource "digitalocean_record" "app" {
  domain = var.domain
  type   = "A"
  name   = "myapp-${var.instance_id}"
  value  = digitalocean_loadbalancer.app.ip
  ttl    = 300
}

variables.tf

variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

variable "region" {
  type        = string
  description = "DigitalOcean region"
  default     = "nyc3"
}

variable "domain" {
  type        = string
  description = "DNS domain for application"
}

outputs.tf

output "kubernetes_endpoint" {
  description = "DOKS cluster endpoint"
  value       = digitalocean_kubernetes_cluster.main.endpoint
  sensitive   = true
}

output "database_host" {
  description = "PostgreSQL database host"
  value       = digitalocean_database_cluster.postgres.host
  sensitive   = true
}

output "database_port" {
  description = "PostgreSQL database port"
  value       = digitalocean_database_cluster.postgres.port
}

output "redis_host" {
  description = "Redis cache host"
  value       = digitalocean_database_cluster.redis.host
  sensitive   = true
}

output "spaces_bucket" {
  description = "Spaces bucket name"
  value       = digitalocean_spaces_bucket.data.name
}

output "app_url" {
  description = "Application URL"
  value       = "https://${digitalocean_record.app.fqdn}"
}

Best practices

Every DigitalOcean resource with a name should include ${var.instance_id} to prevent conflicts across customer appliances:
# ✓ CORRECT
resource "digitalocean_spaces_bucket" "data" {
  name = "myapp-data-${var.instance_id}"
}

resource "digitalocean_kubernetes_cluster" "app" {
  name = "myapp-cluster-${var.instance_id}"
}

# ✗ INCORRECT - Will cause collisions
resource "digitalocean_spaces_bucket" "data" {
  name = "myapp-data"
}
Apply the instance-id tag to every resource. DigitalOcean uses string tags for most resources:
# For most resources (DOKS, Databases, Load Balancers, Droplets, Volumes)
tags = ["instance-id:${var.instance_id}", "myapp"]

# For Spaces buckets (key-value tags)
tags = {
  instance-id = var.instance_id
  application = "my-app"
}
This enables:
  • Observability data filtering
  • Cost tracking
  • Resource discovery
For compute workloads in your AWS origin stack, prefer managed container and serverless services over EC2 instances. These compile cleanly to DigitalOcean Kubernetes (DOKS) and Functions:
# ✓ CORRECT: EKS compiles to DOKS
resource "aws_eks_cluster" "app" {
  name     = "myapp-cluster-${var.instance_id}"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = aws_subnet.private[*].id
  }
}

# ✓ CORRECT: ECS Fargate compiles to DOKS
resource "aws_ecs_cluster" "app" {
  name = "myapp-cluster-${var.instance_id}"
}

resource "aws_ecs_service" "app" {
  name            = "myapp-${var.instance_id}"
  cluster         = aws_ecs_cluster.app.id
  launch_type     = "FARGATE"
  desired_count   = 2
}

# ✓ CORRECT: Lambda compiles to DigitalOcean Functions or Knative
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
}
These AWS resources automatically compile to appropriate DigitalOcean equivalents (DOKS, Functions) when deployed to DigitalOcean customer environments.
Never hardcode secrets. Use AWS Secrets Manager or SSM Parameter Store with parameterized names in your AWS origin stack:
# AWS Secrets Manager (recommended)
resource "aws_secretsmanager_secret" "db_creds" {
  name = "${var.instance_id}/prod/db/credentials"

  tags = {
    "instance-id" = var.instance_id
  }
}

# Or AWS Systems Manager Parameter Store
resource "aws_ssm_parameter" "api_key" {
  name  = "/${var.instance_id}/prod/api/key"
  type  = "SecureString"
  value = var.api_key

  tags = {
    "instance-id" = var.instance_id
  }
}
Pass secrets to your application as environment variables. Runtime SDK calls to fetch secrets are not automatically mapped by Tensor9.

Troubleshooting

Symptom: Terraform apply fails with “unauthorized” or “forbidden” errors.
Solutions:
  • Verify the Tensor9 controller has the correct Deploy API token
  • Check that the token has the necessary scopes (kubernetes:write, database:write, etc.)
  • Ensure the token has not expired
  • Verify the token has not been revoked
  • Check that the DigitalOcean account is not suspended and has no billing issues
Symptom: “Name already in use” or “Resource already exists” errors.
Solutions:
  • Ensure all resource names include ${var.instance_id}
  • Verify the instance_id variable is being passed correctly
  • Check that no hardcoded resource names exist in your origin stack
  • For Spaces buckets, remember they must be regionally unique
Symptom: Metrics and logs aren’t appearing in your observability sink.
Solutions:
  • Verify the Steady-state token has monitoring:read scope
  • Check that all resources are tagged with instance-id
  • Ensure DOKS logging is enabled
  • Verify the control plane can authenticate with the Steady-state token
  • Check network connectivity between appliance and control plane
Symptom: “Quota exceeded” or “Droplet limit reached” errors.
Solutions:
  • Ask the customer to request quota increases from DigitalOcean support
  • Consider using smaller droplet sizes
  • Review and clean up unused resources in the customer’s account
  • Deploy across multiple DigitalOcean regions
Symptom: Kubernetes cluster creation times out or fails.
Solutions:
  • Verify the region supports DOKS
  • Check that the Kubernetes version is supported
  • Ensure the node pool size is available in the region
  • Verify VPC configuration is correct
  • Check DigitalOcean status page for service incidents
If you’re experiencing issues not covered here or need additional assistance with DigitalOcean deployments, we’re here to help:
  • Slack: Join our community Slack workspace for real-time support
  • Email: Contact us at [email protected]
Our team can help with deployment troubleshooting, API token configuration, service equivalents, and best practices for DigitalOcean environments.

Next steps

Now that you understand deploying to DigitalOcean customer environments, explore these related topics: