Amazon Web Services (AWS) is Tensor9’s primary deployment platform and the reference implementation for all service equivalents. Deploying to AWS provides the most comprehensive feature set and serves as the baseline from which other form factors are derived.

Overview

When you deploy an application to AWS customer environments using Tensor9:
  • Customer appliances run entirely within the customer’s AWS account
  • Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
  • Cross-account IAM roles enable your control plane to manage customer appliances with customer-approved permissions
  • Service equivalents compile your origin stack into AWS-native resources (or preserve them if already AWS-based)
AWS appliances leverage AWS-native services for compute, storage, networking, and observability, providing enterprise-grade infrastructure that integrates seamlessly with your customers’ existing AWS environments.

Prerequisites

Before deploying appliances to AWS customer environments, ensure:

Your Control Plane

  • Dedicated AWS account for your Tensor9 control plane
  • Control plane installed - See Installing Tensor9
  • Origin stack published - Your application infrastructure defined and uploaded

Customer AWS Account

Your customers must provide:
  • AWS account where the appliance will be deployed
  • IAM roles configured for the four-phase permissions model (Install, Steady-state, Deploy, Operate)
  • VPC and networking configured according to their requirements
  • Sufficient service quotas for your application’s resource needs
  • AWS region where they want the appliance deployed

Your Development Environment

  • AWS CLI installed and configured
  • Terraform or OpenTofu (if using Terraform origin stacks)
  • AWS CloudFormation CLI (if using CloudFormation origin stacks)
  • Docker (if deploying container-based applications)

How AWS appliances work

AWS appliances are deployed using AWS-native services orchestrated by your Tensor9 control plane.

Step 1: Customer provisions IAM roles

Your customer creates four IAM roles in their AWS account, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These roles define what the Tensor9 controller in the appliance can do within their environment. The customer configures trust policies that allow the Tensor9 controller to assume these roles with appropriate conditions (time windows, approval tags, etc.).
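
As a rough illustration, a customer might provision these roles with Terraform. This is a minimal sketch, not a Tensor9-mandated template: the role names, the controller principal variable, and the omitted permission policies are all illustrative.
variable "tensor9_controller_arn" {
  type        = string
  description = "Principal allowed to assume the phase roles (illustrative)"
}

# One role per permission phase; permission policies are attached separately
resource "aws_iam_role" "phase" {
  for_each = toset(["Install", "SteadyState", "Deploy", "Operate"])

  name = "Tensor9${each.key}Role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = var.tensor9_controller_arn }
      Action    = "sts:AssumeRole"
    }]
  })
}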

Step 2: You create a release for the customer appliance

You create a release targeting the customer’s appliance:
tensor9 stack release create \
  -appName my-app \
  -customerName acme-corp \
  -vendorVersion "1.0.0" \
  -description "Initial production deployment"
Your control plane compiles your origin stack into a deployment stack tailored for AWS, compiling any non-AWS resources to their AWS service equivalents. The deployment stack downloads to your local environment.

Step 3: Customer grants deploy access

The customer approves the deployment by granting temporary deploy access. This can be manual (updating IAM policy conditions) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can assume the Deploy role in the customer’s account.

Step 4: You deploy the release

You run the deployment locally against the downloaded deployment stack:
cd acme-corp-production
tofu init
tofu apply
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller assumes the Deploy role and creates all infrastructure resources in the customer’s AWS account:
  • VPCs, subnets, security groups
  • EKS clusters, Lambda functions
  • RDS databases, S3 buckets, ElastiCache clusters
  • CloudWatch log groups, IAM roles, Route 53 records
  • Any other AWS resources defined in your origin stack

Step 5: Steady-state observability begins

After deployment, your control plane uses the Steady-state role to continuously collect observability data (logs, metrics, traces) from the customer’s appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.

Permissions model

AWS appliances use a four-phase IAM permissions model that balances operational capability with customer control.

The four permission phases

Phase        | IAM Role        | Purpose                                          | Access Pattern
Install      | InstallRole     | Initial setup, major infrastructure changes      | Customer-approved, rare
Steady-state | SteadyStateRole | Continuous observability collection (read-only)  | Active by default
Deploy       | DeployRole      | Deployments, updates, configuration changes      | Customer-approved, time-bounded
Operate      | OperateRole     | Remote operations, troubleshooting, debugging    | Customer-approved, time-bounded

IAM role structure

Each role is created in the customer’s AWS account with a trust policy that allows the Tensor9 controller in the appliance to assume it.

Example: Deploy role with conditional access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::VENDOR_ACCOUNT:role/ControlPlane"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/DeployAccess": "enabled"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2024-12-31T23:59:59Z"
        }
      }
    }
  ]
}
The Tensor9 controller can only assume the Deploy role when:
  • The DeployAccess tag is set to “enabled”
  • The current time is within the allowed window
Customers control when and how long deploy access is granted.

Example: Steady-state role (read-only observability)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:GetLogEvents",
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "ec2:Describe*",
        "rds:Describe*",
        "eks:Describe*",
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/instance-id": "${var.instance_id}"
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": [
        "iam:*",
        "*:Delete*",
        "*:Terminate*",
        "*:Update*",
        "*:Modify*"
      ],
      "Resource": "*"
    }
  ]
}
The Steady-state role:
  • Can read observability data from resources tagged with the appliance’s instance-id
  • Cannot modify, delete, or terminate any resources
  • Cannot change IAM policies

Deployment workflow with IAM

Step 1: Customer grants deploy access

Customer approves a deployment by setting the DeployAccess tag to “enabled” and defining a time window. This can be done manually or through automated approval workflows.
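
As an illustration of the manual path, the customer could re-apply the Deploy role’s trust policy with a fresh expiry to open a window. This is a hypothetical sketch: the variable name and role name are illustrative, and real approval workflows may differ.
variable "deploy_window_end" {
  type        = string
  description = "Timestamp after which the Deploy role can no longer be assumed"
}

resource "aws_iam_role" "deploy" {
  name = "Tensor9DeployRole"

  # Re-applying this trust policy with a new deploy_window_end grants a
  # fresh, time-bounded deploy window
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::VENDOR_ACCOUNT:role/ControlPlane" }
      Action    = "sts:AssumeRole"
      Condition = {
        StringEquals = { "aws:RequestTag/DeployAccess" = "enabled" }
        DateLessThan = { "aws:CurrentTime" = var.deploy_window_end }
      }
    }]
  })
}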

Step 2: You execute deployment locally

You run the deployment locally against the downloaded deployment stack:
cd acme-corp-production
tofu init
tofu apply
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.

Step 3: Controller assumes Deploy role and creates resources

For each resource Terraform attempts to create, the Tensor9 controller inside the appliance assumes the Deploy role and creates the resource in the customer’s account. All infrastructure changes occur within the customer’s account using their Deploy role permissions.
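
Mechanically, this is the standard cross-account pattern the Terraform AWS provider supports directly. The sketch below illustrates the role-assumption mechanics only, not Tensor9’s internal implementation; the account ID, region, and session name are placeholders.
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::CUSTOMER_ACCOUNT:role/Tensor9DeployRole"
    session_name = "tensor9-deploy"

    # Session tag that satisfies the aws:RequestTag/DeployAccess condition
    # in the trust policy shown earlier
    tags = {
      DeployAccess = "enabled"
    }
  }
}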

Step 4: Deploy access expires

After the time window expires, the Deploy role can no longer be assumed. Your control plane automatically reverts to using only the Steady-state role for observability.
See Permissions Model for detailed information on all four phases.

Networking

AWS appliances use an isolated networking architecture with a Tensor9 controller that manages communication with your control plane.

Tensor9 controller VPC

When an appliance is deployed, Tensor9 creates an isolated VPC containing the Tensor9 controller. This VPC is configured with:
  • Internet Gateway: Provides outbound internet connectivity
  • Route to control plane: Establishes a secure channel to your Tensor9 control plane
  • No ingress ports: The controller VPC does not accept inbound connections - all communication is outbound-only
The Tensor9 controller uses this secure channel to:
  • Receive deployments: Deployment stacks are pushed from your control plane to the appliance
  • Configure observability pipeline: Set up log, metric, and trace forwarding to your observability sink
  • Receive operational commands: Execute remote operations initiated from your control plane

Outbound-only security model

The Tensor9 controller in your customer’s appliance is designed to only make outbound connections and not require ingress ports to be opened in your customer’s network perimeter:
# Example: Controller VPC configuration (managed by Tensor9)
resource "aws_vpc" "tensor9_controller" {
  cidr_block           = "10.0.0.0/24"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name          = "tensor9-controller-${var.instance_id}"
    "instance-id" = var.instance_id
    ManagedBy     = "Tensor9"
  }
}

resource "aws_internet_gateway" "controller" {
  vpc_id = aws_vpc.tensor9_controller.id

  tags = {
    Name          = "tensor9-controller-igw-${var.instance_id}"
    "instance-id" = var.instance_id
    ManagedBy     = "Tensor9"
  }
}

# Security group: egress only, no ingress
resource "aws_security_group" "controller" {
  name        = "tensor9-controller-sg-${var.instance_id}"
  description = "Tensor9 controller security group - outbound only"
  vpc_id      = aws_vpc.tensor9_controller.id

  # No ingress rules - controller never accepts inbound connections

  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS to control plane"
  }

  tags = {
    Name          = "tensor9-controller-sg-${var.instance_id}"
    "instance-id" = var.instance_id
    ManagedBy     = "Tensor9"
  }
}
This architecture removes the controller’s inbound attack surface: with no ingress rules, there are no listening ports an attacker can reach through the customer’s network perimeter.

Application VPC topology

Your application resources run in their own VPC(s), completely separate from the Tensor9 controller VPC. The application VPC topology is defined entirely by your origin stack: whatever VPC resources you define will be deployed into the appliance.

Example: Application VPC with internet-facing load balancer

If your origin stack defines a VPC with public subnets, an internet gateway, and a load balancer, that exact topology will be created in the customer’s appliance:
# Application VPC (defined in your origin stack)
resource "aws_vpc" "application" {
  cidr_block           = "10.1.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name          = "myapp-vpc-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Internet gateway for application traffic
resource "aws_internet_gateway" "application" {
  vpc_id = aws_vpc.application.id

  tags = {
    Name          = "myapp-igw-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Public subnets for load balancer
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.application.id
  cidr_block              = "10.1.${count.index}.0/24"
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name          = "myapp-public-${count.index}-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Private subnets for application servers
resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.application.id
  cidr_block        = "10.1.${count.index + 10}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name          = "myapp-private-${count.index}-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Application Load Balancer
resource "aws_lb" "application" {
  name               = "myapp-alb-${var.instance_id}"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = aws_subnet.public[*].id

  tags = {
    Name          = "myapp-alb-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Security group for load balancer
resource "aws_security_group" "alb" {
  name        = "myapp-alb-sg-${var.instance_id}"
  description = "Security group for application load balancer"
  vpc_id      = aws_vpc.application.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS from internet"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "All outbound traffic"
  }

  tags = {
    Name          = "myapp-alb-sg-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}
This application VPC topology is deployed alongside the Tensor9 controller VPC, but they remain completely separate. The controller VPC manages the control plane connection, while the application VPC handles your application’s traffic and resources.

Resource naming and tagging

All AWS resources should use the instance_id variable to ensure uniqueness across multiple customer appliances.

Parameterization pattern

variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

# S3 buckets
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"
}

# RDS databases
resource "aws_db_instance" "postgres" {
  identifier = "myapp-db-${var.instance_id}"
}

# Lambda functions
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
}

# CloudWatch log groups
resource "aws_cloudwatch_log_group" "api_logs" {
  name = "/aws/lambda/myapp-api-${var.instance_id}"
}

# IAM roles
resource "aws_iam_role" "api_role" {
  name = "myapp-api-role-${var.instance_id}"
}

Required tags

Tag all resources with instance-id to enable permissions scoping and observability:
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"

  tags = {
    "instance-id" = var.instance_id
    Application   = "my-app"
    ManagedBy     = "Tensor9"
  }
}
The instance-id tag:
  • Enables IAM condition keys to scope permissions to specific appliances
  • Allows CloudWatch filters to isolate telemetry by appliance
  • Helps customers track costs per appliance
  • Facilitates resource discovery by Tensor9 controllers

Observability

AWS appliances provide comprehensive observability through CloudWatch, X-Ray, and CloudTrail.

CloudWatch Logs

Application and infrastructure logs flow to CloudWatch Log Groups:
resource "aws_cloudwatch_log_group" "lambda_logs" {
  name              = "/aws/lambda/${aws_lambda_function.api.function_name}"
  retention_in_days = 14

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_cloudwatch_log_group" "eks_logs" {
  name              = "/aws/eks/${aws_eks_cluster.main.name}/cluster"
  retention_in_days = 7

  tags = {
    "instance-id" = var.instance_id
  }
}
Your control plane uses the Steady-state role to continuously fetch logs:
aws logs get-log-events \
  --log-group-name "/aws/lambda/myapp-api-000000007e" \
  --log-stream-name "2024/01/15/[\$LATEST]abcdef123456"
Logs are forwarded to your observability sink for centralized monitoring.

CloudWatch Metrics

Infrastructure metrics are automatically collected:
  • RDS: Database connections, query latency, storage usage
  • Lambda: Invocations, duration, errors, throttles
  • EKS: Node CPU/memory, pod counts, API server metrics
  • ALB: Request counts, latency, HTTP status codes
Custom application metrics can be published:
import os
import boto3

cloudwatch = boto3.client('cloudwatch')

# INSTANCE_ID is injected into the application environment by the deployment stack
instance_id = os.environ['INSTANCE_ID']

cloudwatch.put_metric_data(
    Namespace='MyApp',
    MetricData=[
        {
            'MetricName': 'OrdersProcessed',
            'Value': 42,
            'Unit': 'Count',
            'Dimensions': [
                {
                    'Name': 'InstanceId',
                    'Value': instance_id
                }
            ]
        }
    ]
)

AWS X-Ray

Enable distributed tracing for Lambda and containerized applications:
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
  runtime       = "nodejs18.x"
  handler       = "index.handler"
  role          = aws_iam_role.lambda.arn

  tracing_config {
    mode = "Active"
  }

  environment {
    variables = {
      AWS_XRAY_TRACING_NAME = "myapp-api-${var.instance_id}"
    }
  }
}
X-Ray traces are accessible through the Steady-state role and forwarded to your observability sink.

CloudTrail auditing

All API calls within the customer’s AWS account are logged to CloudTrail, providing a complete audit trail of what your control plane does:
  • Role assumptions (when Deploy or Operate roles are assumed)
  • Resource creation, modification, deletion
  • Permission denials
  • Configuration changes
Customers have full visibility into your control plane’s actions through their CloudTrail logs.
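
If the customer wants a dedicated audit trail for the appliance, a minimal customer-side sketch might look like the following. The trail name is illustrative, and the S3 bucket (with its required CloudTrail bucket policy) is assumed to exist elsewhere.
resource "aws_cloudtrail" "appliance_audit" {
  name                          = "appliance-audit-${var.instance_id}"
  s3_bucket_name                = aws_s3_bucket.trail_logs.id
  is_multi_region_trail         = true
  include_global_service_events = true

  tags = {
    "instance-id" = var.instance_id
  }
}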

Artifacts

AWS appliances automatically provision private artifact repositories to store container images and application files deployed by your deployment stacks.

Container images (Amazon ECR)

When you deploy an appliance, Tensor9 automatically provisions a private ECR repository in the customer’s AWS account to store your container images.

Example: Origin stack with ECS service

Your origin stack references container images from your vendor ECR repository:
# ECS Task Definition in your origin stack
resource "aws_ecs_task_definition" "app" {
  family                   = "myapp-${var.instance_id}"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name  = "api"
      # Reference to your vendor ECR repository
      image = "123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp-api:1.0.0"

      portMappings = [
        {
          containerPort = 8080
          protocol      = "tcp"
        }
      ]

      environment = [
        {
          name  = "INSTANCE_ID"
          value = var.instance_id
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}

# ECS Service
resource "aws_ecs_service" "app" {
  name            = "myapp-service-${var.instance_id}"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.private[*].id
    security_groups  = [aws_security_group.ecs_tasks.id]
    assign_public_ip = false
  }

  tags = {
    "instance-id" = var.instance_id
  }
}
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
  1. Detects the container image reference in your ECS task definition
  2. Provisions a private ECR repository in the appliance (e.g., 987654321098.dkr.ecr.us-east-1.amazonaws.com/myapp-api-000000007e)
  3. Copies the container image from your vendor ECR (123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp-api:1.0.0) to the appliance’s private ECR
  4. Rewrites the deployment stack to reference the appliance-local ECR repository
The compiled deployment stack will contain:
container_definitions = jsonencode([
  {
    name  = "api"
    # Rewritten to reference appliance's private ECR
    image = "987654321098.dkr.ecr.us-east-1.amazonaws.com/myapp-api-000000007e:1.0.0"
    # ... rest of configuration
  }
])
This ensures the container image is stored locally in the customer’s account and the application doesn’t depend on cross-account access to your vendor ECR.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
  • Deploy (tofu apply): Tensor9 copies the container image from your vendor ECR to the appliance’s private ECR
  • Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private ECR
This ensures that artifacts are cleaned up when deployments are removed, preventing orphaned resources.

Lambda deployment packages (S3)

For Lambda functions, Tensor9 supports copying Lambda deployment packages (zip files) from S3. This follows the same copy pattern as container images:
# Lambda function referencing S3 deployment package in your origin stack
resource "aws_lambda_function" "processor" {
  function_name = "myapp-processor-${var.instance_id}"
  role          = aws_iam_role.lambda.arn
  handler       = "index.handler"
  runtime       = "python3.11"

  # Reference to Lambda zip in your vendor S3 bucket
  s3_bucket = "my-vendor-lambda-artifacts"
  s3_key    = "functions/processor-v1.0.0.zip"

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }

  tags = {
    "instance-id" = var.instance_id
  }
}
During deployment, Tensor9:
  1. Provisions a private S3 bucket in the appliance for Lambda artifacts
  2. Copies the Lambda zip file from your vendor S3 bucket to the appliance’s S3 bucket
  3. Rewrites the Lambda function definition to reference the appliance-local S3 bucket
Like container images, destroying the deployment stack (tofu destroy) removes the copied Lambda deployment packages.

See Artifacts for comprehensive documentation on artifact management, including immutability requirements and supported artifact types.

Secrets management

Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store, then pass them to your application as environment variables.

Secret naming and injection

Always use parameterized secret names and inject them as environment variables:
# AWS Secrets Manager secret
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = var.db_password
}

# ECS Fargate task - inject secret as environment variable
resource "aws_ecs_task_definition" "app" {
  family = "myapp-${var.instance_id}"

  container_definitions = jsonencode([
    {
      name  = "app"
      image = "myapp:latest"

      # Inject secret as environment variable
      secrets = [
        {
          name      = "DB_PASSWORD"
          valueFrom = aws_secretsmanager_secret.db_password.arn
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}
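
The example above uses Secrets Manager; if you prefer Parameter Store (mentioned at the top of this section), a minimal sketch might look like this, with the parameter path following the same instance-scoped naming convention:
resource "aws_ssm_parameter" "db_password" {
  name  = "/${var.instance_id}/prod/db/password"
  type  = "SecureString"
  value = var.db_password

  tags = {
    "instance-id" = var.instance_id
  }
}
ECS can inject the parameter the same way as a Secrets Manager secret: point the container definition’s secrets valueFrom at the parameter’s ARN.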
Your application reads secrets from environment variables:
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
Pass secrets as environment variables rather than using runtime SDK calls. While boto3.client('secretsmanager').get_secret_value() works natively in AWS appliances, using environment variables ensures your application works consistently across all deployment targets (AWS, Google Cloud, DigitalOcean).
See Secrets for detailed secret management patterns.

Operations

Perform remote operations on AWS appliances using the Operate role.

kubectl on EKS

Execute kubectl commands against EKS clusters:
tensor9 ops kubectl \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "aws_eks_cluster.main_cluster" \
  -command "kubectl get pods -n my-app-namespace"
Output:
NAME                     READY   STATUS    RESTARTS   AGE
api-7d9f8b5c6d-9k2lm    1/1     Running   0          2h
worker-5c8d7b4f3-8h4km  1/1     Running   0          2h

AWS CLI operations

Execute AWS CLI commands:
# List S3 bucket contents
tensor9 ops aws \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "aws_s3_bucket.data" \
  -command "aws s3 ls s3://myapp-data-000000007e/"

# Invoke Lambda function
tensor9 ops aws \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "aws_lambda_function.api" \
  -command "aws lambda invoke --function-name myapp-api-000000007e output.json"

# View RDS status
tensor9 ops aws \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "aws_db_instance.postgres" \
  -command "aws rds describe-db-instances --db-instance-identifier myapp-db-000000007e"

Database queries

Execute SQL queries against RDS databases:
tensor9 ops db \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "aws_db_instance.postgres" \
  -command "SELECT count(*) FROM users WHERE created_at > NOW() - INTERVAL '24 hours'"

Operations endpoints

Create temporary operations endpoints for interactive access:
# Create kubectl endpoint
tensor9 ops endpoint create \
  -appName my-app \
  -customerName acme-corp \
  -originResourceId "aws_eks_cluster.main_cluster" \
  -endpointType kubectl \
  -ttl 3600

# Output:
# Endpoint created: https://ops.tensor9.io/kubectl/abc123
# Expires in: 1 hour
# Use: kubectl --server=https://ops.tensor9.io/kubectl/abc123 get pods
See Operations for comprehensive operations documentation.

Example: Complete AWS appliance

Here’s an example of a Terraform origin stack for an AWS appliance. Supporting resources (subnet groups, security groups, and the EKS cluster IAM role) are elided for brevity:

main.tf

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name          = "myapp-vpc-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# Availability zones
data "aws_availability_zones" "available" {
  state = "available"
}

# Subnets
resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name          = "myapp-private-${count.index}-${var.instance_id}"
    "instance-id" = var.instance_id
  }
}

# EKS Cluster
resource "aws_eks_cluster" "main" {
  name     = "myapp-cluster-${var.instance_id}"
  role_arn = aws_iam_role.cluster.arn
  version  = "1.28"

  vpc_config {
    subnet_ids = aws_subnet.private[*].id
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  tags = {
    "instance-id" = var.instance_id
  }
}

# RDS PostgreSQL
resource "aws_db_instance" "postgres" {
  identifier           = "myapp-db-${var.instance_id}"
  engine               = "postgres"
  engine_version       = "15.3"
  instance_class       = "db.t3.micro"
  allocated_storage    = 20
  db_name              = "myapp"
  username             = "admin"
  password             = var.db_password
  db_subnet_group_name = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.db.id]
  skip_final_snapshot  = false
  final_snapshot_identifier = "myapp-db-${var.instance_id}-final"

  tags = {
    "instance-id" = var.instance_id
  }
}

# S3 bucket
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id

  versioning_configuration {
    status = "Enabled"
  }
}

# ElastiCache Redis
resource "aws_elasticache_cluster" "redis" {
  cluster_id           = "myapp-redis-${var.instance_id}"
  engine               = "redis"
  node_type            = "cache.t3.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis7"
  engine_version       = "7.0"
  port                 = 6379
  subnet_group_name    = aws_elasticache_subnet_group.main.name
  security_group_ids   = [aws_security_group.redis.id]

  tags = {
    "instance-id" = var.instance_id
  }
}

# CloudWatch Log Groups
resource "aws_cloudwatch_log_group" "eks" {
  name              = "/aws/eks/myapp-cluster-${var.instance_id}/cluster"
  retention_in_days = 7

  tags = {
    "instance-id" = var.instance_id
  }
}

# Secrets
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"

  tags = {
    "instance-id" = var.instance_id
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = var.db_password
}

variables.tf

variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

variable "db_password" {
  type        = string
  description = "Database master password"
  sensitive   = true
}

variable "region" {
  type        = string
  description = "AWS region"
  default     = "us-west-2"
}

outputs.tf

output "eks_cluster_endpoint" {
  description = "EKS cluster API endpoint"
  value       = aws_eks_cluster.main.endpoint
}

output "database_endpoint" {
  description = "RDS database endpoint"
  value       = aws_db_instance.postgres.endpoint
  sensitive   = true
}

output "redis_endpoint" {
  description = "Redis cache endpoint"
  value       = aws_elasticache_cluster.redis.cache_nodes[0].address
}

output "data_bucket" {
  description = "S3 data bucket name"
  value       = aws_s3_bucket.data.id
}

Best practices

Every AWS resource with a name or identifier should include ${var.instance_id} to prevent conflicts across customer appliances:
# ✓ CORRECT
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"
}

resource "aws_iam_role" "lambda" {
  name = "myapp-lambda-${var.instance_id}"
}

# ✗ INCORRECT - Will cause collisions
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data"
}
Apply the instance-id tag to every resource:
tags = {
  "instance-id" = var.instance_id
}
This enables:
  • IAM permission scoping
  • CloudWatch filtering
  • Cost tracking
  • Resource discovery
Configure logging for Lambda, EKS, RDS, and other services:
resource "aws_cloudwatch_log_group" "service_logs" {
  name              = "/aws/service/${var.instance_id}"
  retention_in_days = 14
}
This ensures observability data flows to your control plane.
Never hardcode secrets. Use Secrets Manager with parameterized names and pass them to your application as environment variables:
# Define secret
resource "aws_secretsmanager_secret" "api_key" {
  name = "${var.instance_id}/prod/api/key"

  tags = {
    "instance-id" = var.instance_id
  }
}

# Inject into ECS task as environment variable
resource "aws_ecs_task_definition" "app" {
  family = "myapp-${var.instance_id}"

  container_definitions = jsonencode([
    {
      name  = "app"
      image = "myapp:latest"

      secrets = [
        {
          name      = "API_KEY"
          valueFrom = aws_secretsmanager_secret.api_key.arn
        }
      ]
    }
  ])
}
Pass secrets as environment variables rather than using runtime SDK calls to ensure consistency across all deployment targets.

Troubleshooting

Symptom: Terraform apply fails with “AccessDenied” or “UnauthorizedOperation” errors.

Solutions:
  • Verify the Tensor9 controller has successfully assumed the Deploy role
  • Check the Deploy role’s IAM policy includes necessary permissions for the resources being created
  • Ensure the Deploy role’s trust policy allows the Tensor9 controller to assume the role
  • Verify the DeployAccess tag is set and the time window hasn’t expired
  • Review CloudTrail logs in the customer account to see which specific API call was denied
Symptom: “ResourceAlreadyExists” or “BucketAlreadyExists” errors during deployment.

Solutions:
  • Ensure all resource names include ${var.instance_id}
  • Verify the instance_id variable is being passed correctly
  • Check that no hardcoded resource names exist in your origin stack
  • For S3 buckets, remember they must be globally unique - include both app name and instance_id
Symptom: CloudWatch logs and metrics aren’t appearing in your observability sink.

Solutions:
  • Verify the Steady-state role has permissions to read CloudWatch logs and metrics
  • Check that all log groups and resources are tagged with instance-id
  • Ensure log group names are parameterized and follow the expected pattern
  • Verify CloudWatch log retention is set (logs may be deleted if retention is too short)
  • Check that the control plane is successfully assuming the Steady-state role
Symptom: “VpcLimitExceeded” error when creating VPCs.

Solutions (a sketch of the existing-VPC approach follows this list):
  • Ask the customer to request a VPC quota increase from AWS (default is 5 per region)
  • Consider deploying appliances in separate AWS regions
  • Use existing customer VPCs with dedicated subnets instead of creating new VPCs
  • Ask the customer to clean up unused VPCs in their account
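For the existing-VPC approach, a hedged sketch of the origin-stack change: instead of an aws_vpc resource, look up the customer’s VPC by tag (the tag key and value here are illustrative) and place your subnets in it.
# Look up an existing customer VPC rather than creating a new one
data "aws_vpc" "existing" {
  tags = {
    Name = "customer-shared-vpc"
  }
}

resource "aws_subnet" "private" {
  vpc_id     = data.aws_vpc.existing.id
  cidr_block = "10.0.42.0/24"

  tags = {
    "instance-id" = var.instance_id
  }
}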
Symptom: “InvalidParameterCombination” when enabling encryption on RDS instances.

Solutions (a sketch follows this list):
  • Ensure storage_encrypted = true is set when creating the instance
  • Use a customer-managed KMS key if required by customer policy
  • Note that encryption cannot be enabled on existing unencrypted instances - must create new instance
  • Verify the Deploy role has KMS permissions if using customer-managed keys
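A minimal sketch of encryption set at creation time; the customer_kms_key_arn variable is illustrative and only needed when the customer requires a customer-managed key.
resource "aws_db_instance" "postgres" {
  identifier        = "myapp-db-${var.instance_id}"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "postgres"
  password          = var.db_password

  # Encryption must be enabled at creation; it cannot be added to an
  # existing unencrypted instance
  storage_encrypted = true
  kms_key_id        = var.customer_kms_key_arn
}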
If you’re experiencing issues not covered here or need additional assistance with AWS deployments, we’re here to help:
  • Slack: Join our community Slack workspace for real-time support
  • Email: Contact us at [email protected]
Our team can help with deployment troubleshooting, IAM configuration, service equivalents, and best practices for AWS environments.

Next steps

Now that you understand deploying to AWS customer environments, explore these related topics: