An origin stack is the blueprint for your application: the infrastructure-as-code, container definitions, or configuration files that define how your application is built and deployed. Your origin stack represents the canonical version of your application that Tensor9 compiles into customer-specific deployment stacks for each appliance.

What is an origin stack?

Your origin stack is the source of truth for your application’s infrastructure and configuration. It contains all the resources, dependencies, and settings needed to run your application. When you publish an origin stack to Tensor9, your control plane uses it as the template to generate deployment stacks tailored to each customer’s specific environment and form factor.

Think of your origin stack as the infrastructure-as-code you already use to deploy your SaaS application. It defines everything from compute resources (like Lambda functions, containers, or VMs) to databases, storage buckets, networking configuration, IAM roles, and any other cloud resources your application needs. Tensor9 uses this single origin stack to generate a customized deployment for each customer appliance, adapting the infrastructure to match each customer’s target environment.
Tensor9 uses your existing origin stack as-is. You don’t need to define a new origin stack specifically for Tensor9 deployment - simply publish the same Terraform, Docker Container, Docker Compose, or CloudFormation code you already use to deploy your SaaS product.

Example: A Terraform/OpenTofu origin stack

Consider a typical SaaS application with an API, database, and storage. Your origin stack might look like this:
variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

variable "db_password" {
  type        = string
  description = "Password for the application database"
  sensitive   = true
}

# API Lambda function
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  role          = aws_iam_role.api_role.arn

  environment {
    variables = {
      DB_HOST     = aws_db_instance.postgres.endpoint
      BUCKET_NAME = aws_s3_bucket.data.id
      INSTANCE_ID = var.instance_id
    }
  }
}

# PostgreSQL database
resource "aws_db_instance" "postgres" {
  identifier        = "myapp-db-${var.instance_id}"
  engine            = "postgres"
  engine_version    = "15.3"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  db_name           = "myapp"
  username          = "admin"
  password          = var.db_password
}

# S3 bucket for application data
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"
}

# IAM role for Lambda
resource "aws_iam_role" "api_role" {
  name = "myapp-api-role-${var.instance_id}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}
This origin stack defines a complete application. When you publish it and then create a release:
  1. For customer A on AWS: Tensor9 generates a deployment stack that creates these same resources in Customer A’s AWS account
  2. For customer B on Google Cloud: Tensor9 translates the resources (RDS becomes Cloud SQL, S3 becomes Cloud Storage, Lambda becomes Cloud Functions) and generates a deployment stack for Customer B’s Google Cloud account
  3. For customer C private: Tensor9 translates the resources to use private equivalents (e.g., RDS becomes CloudNativePG, S3 becomes MinIO) and generates a deployment stack for Customer C’s private environment
  4. For a test appliance D: Tensor9 generates a deployment stack that creates these resources in the test appliance in your Tensor9 AWS account
The origin stack remains unchanged - you maintain a single source of truth while Tensor9 handles the complexity of deploying to multiple customers across different form factors.

Supported stack types

Tensor9 supports multiple infrastructure-as-code and container formats as origin stacks:
| Stack Type | Example |
| --- | --- |
| Terraform/OpenTofu | s3://my-bucket/my-tf-workspace.tf.tgz |
| Docker Container | 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest |
| Docker Compose | s3://t9-ctrl-000001/my-app-compose.yml |
| CloudFormation | arn:aws:cloudformation:us-west-2:123456789012:stack/my-app-stack/a1b2c3d4 |
| Kubernetes | Manifest or Helm chart embedded in an origin stack of a different type |

How origin stacks work

When you create a release for an appliance, Tensor9 performs a compilation process that transforms your origin stack into a deployment stack:
  1. Validation: Your control plane inspects the origin stack to ensure it’s well-formed and meets Tensor9 requirements
  2. Porting: Resources are translated to their equivalents in the target customer’s environment based on the appliance’s form factor
  3. Observability: The stack is instrumented to route logs, metrics, and traces back to your observability sink
  4. Packaging: The result is a deployment stack - a self-contained artifact you deploy using standard tooling
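As a rough illustration of the porting step, consider the PostgreSQL database from the Terraform example above. For an appliance on Google Cloud, the compiler would translate it into a managed-Postgres equivalent. The exact resources Tensor9 emits are internal to its compiler; the google_sql_database_instance below is only a hypothetical sketch of the kind of translation involved:

```hcl
# Origin stack (authored once, AWS-flavored)
resource "aws_db_instance" "postgres" {
  identifier     = "myapp-db-${var.instance_id}"
  engine         = "postgres"
  engine_version = "15.3"
  instance_class = "db.t3.micro"
}

# Hypothetical ported equivalent in a Google Cloud deployment stack
resource "google_sql_database_instance" "postgres" {
  name             = "myapp-db-${var.instance_id}"
  database_version = "POSTGRES_15"

  settings {
    tier = "db-f1-micro" # rough analog of db.t3.micro
  }
}
```

You never write the ported version yourself; it is shown here only to make the translation concrete.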

Publishing origin stacks

To make your origin stack available to Tensor9, you publish it to your control plane. The publishing process depends on the stack type:

Publishing a Terraform origin stack

tensor9 stack publish \
  -stackType TerraformWorkspace \
  -stackS3Key my-stack \
  -dir <PATH_TO_YOUR_TF_ROOT>
This command:
  • Compresses your Terraform workspace into a .tf.tgz archive
  • Uploads it to your control plane’s S3 bucket
  • Returns a native stack id you’ll use to bind the origin stack to your app
Example output:
Creating archive of .tf files in /path/to/your/terraform
Uploading /tmp/my-stack.tf.tgz to s3://t9-ctrl-000001/terraform-stacks/origins/my-stack.tf.tgz
Uploading to s3://t9-ctrl-000001/terraform-stacks/origins/my-stack.tf.tgz...........100%
Successfully uploaded stack. The native stack ID is s3://t9-ctrl-000001/terraform-stacks/origins/my-stack.tf.tgz
In the above example, the stack’s native stack id is: s3://t9-ctrl-000001/terraform-stacks/origins/my-stack.tf.tgz. Pass this native stack id into the tensor9 stack bind command to bind this Terraform origin stack to your app.

Publishing a Docker container origin stack

For Docker containers, you push your container image to your Tensor9 AWS account’s Elastic Container Registry (ECR):
# Authenticate to ECR
aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com

# Tag your image
docker tag my-app:latest <ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com/my-app:latest

# Push to ECR
docker push <ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com/my-app:latest
In the above example, the stack’s native stack id is: <ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com/my-app:latest. Pass this native stack id into the tensor9 stack bind command to bind this Docker container origin stack to your app.

Publishing a Docker Compose origin stack

For Docker Compose applications, you publish your docker-compose.yml file to your control plane:
tensor9 stack publish \
  -stackType DockerCompose \
  -stackS3Key my-app-compose \
  -file docker-compose.yml
This command:
  • Uploads your docker-compose.yml file to your control plane’s S3 bucket
  • Returns a native stack id you’ll use to bind the origin stack to your app
Example output:
Uploading docker-compose.yml to s3://t9-ctrl-000001/my-app-compose.yml
Successfully uploaded stack. The native stack ID is s3://t9-ctrl-000001/my-app-compose.yml
In the above example, the stack’s native stack id is: s3://t9-ctrl-000001/my-app-compose.yml. Pass this native stack id into the tensor9 stack bind command to bind this Docker Compose origin stack to your app.

When you create a release, Tensor9 compiles your docker-compose.yml file into a complete Terraform deployment stack with Kubernetes resources. Services with exposed ports get LoadBalancer services, while internal services use ClusterIP. All container images are automatically copied to the appliance’s container registry.

Using a CloudFormation origin stack

For CloudFormation stacks, you manage and deploy your stack using the AWS CLI, not tensor9 stack publish. Tensor9 references your existing CloudFormation stack and uses its template as your origin stack. First, deploy your CloudFormation stack using the AWS CLI:
aws cloudformation create-stack \
  --stack-name my-app-stack \
  --template-body file://my-template.yaml \
  --region us-west-2
Once deployed, get the stack ARN:
aws cloudformation describe-stacks \
  --stack-name my-app-stack \
  --region us-west-2 \
  --query 'Stacks[0].StackId' \
  --output text
This returns your stack ARN, which is your native stack id: arn:aws:cloudformation:us-west-2:123456789012:stack/my-app-stack/a1b2c3d4. You’ll use this ARN to bind the CloudFormation origin stack to your app. Tensor9 will use the template from this stack as the blueprint for generating deployment stacks.

Binding an origin stack to an app

The first time you publish an origin stack, you must bind it to your app. Binding registers the stack with your app so you can create releases:
tensor9 stack bind \
  -appName my-app \
  -stackType TerraformWorkspace \
  -nativeStackId <YOUR_NATIVE_STACK_ID>
Important: You only need to bind once per app. Future publishes of the same stack (with updated code) don’t require re-binding.

Multiple origin stacks per app

Some applications consist of multiple independently deployable components. You can bind multiple origin stacks to a single app:
# Bind the API stack
tensor9 stack bind \
  -appName my-app \
  -stackType TerraformWorkspace \
  -nativeStackId s3://your-bucket/api-stack.tf.tgz

# Bind the worker stack
tensor9 stack bind \
  -appName my-app \
  -stackType TerraformWorkspace \
  -nativeStackId s3://your-bucket/worker-stack.tf.tgz

Origin stack requirements

To work with Tensor9, your origin stack must meet certain requirements:

Terraform/OpenTofu requirements

  • Valid Terraform: Your configuration must be valid and pass tofu validate
  • Backend Configuration: You can optionally include backend configuration in your origin stack. Tensor9 preserves any backend configuration you provide, giving you control over state management. See Backend Configuration for details.
  • Root Module Location: If your root module is in a subdirectory within the archive, specify the path using // notation:
    • s3://your-bucket/your-tf-workspace.tf.tgz - root module at archive root
    • s3://your-bucket/your-tf-workspace.tf.tgz//infrastructure/terraform - root module in infrastructure/terraform/ subdirectory
  • Instance ID Variable: Your stack should accept an instance_id variable that Tensor9 injects to uniquely identify each appliance:
variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

resource "aws_s3_bucket" "data" {
  bucket = "my-app-data-${var.instance_id}"
  # ...
}
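The optional backend configuration mentioned above is ordinary Terraform. A minimal sketch, assuming an S3 backend (the bucket, key, and region here are placeholders, not values Tensor9 requires):

```hcl
terraform {
  backend "s3" {
    # Backend blocks cannot reference variables; values must be
    # literal here or supplied at init time via -backend-config.
    bucket = "my-terraform-state"      # placeholder bucket name
    key    = "myapp/terraform.tfstate" # placeholder state key
    region = "us-west-2"
  }
}
```

If you omit a backend block entirely, Tensor9 preserves that choice as well; see Backend Configuration for details.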

Docker container requirements

  • Platform Support: Containers must support the linux/amd64 architecture
  • Registry Access: Images must be accessible from your Tensor9 AWS account’s ECR
  • Stateless Design: Follow container best practices for stateless, immutable deployments

Docker Compose requirements

  • Valid Compose File: Your docker-compose.yml must be valid (v2.x or v3.x format)
  • Container Images in Registries: All images referenced in your compose file must be pushed to container registries before creating a release
  • Named Volumes Only: Use named volumes for persistent storage (bind mounts are not supported)
  • External Secrets: Secrets must be defined as external: true and pre-created in the appliance namespace
  • No Build Directive: The build: directive is not supported - all services must reference pre-built images

CloudFormation requirements

  • Valid Template: Your CloudFormation template must be valid and deployable
  • Deployed in Tensor9 AWS Account: The CloudFormation stack must be deployed in your Tensor9 AWS account
  • Parameterized Resources: Use Parameters to make resource names unique per appliance (similar to instance_id in Terraform)
  • No Nested Stacks: Nested CloudFormation stacks are not supported

Updating origin stacks

When you need to release changes to your application, publish a new version of your origin stack:
  1. Make Changes: Update your infrastructure code or container image
  2. Publish: Run tensor9 stack publish (for Terraform) or push a new container image
  3. Release: Create a release targeting the appliances you want to update
The new origin stack version becomes the source for all future releases. Previously deployed releases continue to run their original stack version until you deploy a new release.

Best practices

Your origin stack must be designed to support multiple independent deployments without resource name collisions. This is called “parameterization” - making your infrastructure unique per appliance instance.

When you deploy the same origin stack to multiple customer appliances, each deployment must create its own isolated copy of every resource. Without parameterization, multiple deployments would attempt to create resources with identical names, causing conflicts and failures.

Tensor9 automatically injects an instance_id variable into every deployment. You must use this variable to make all resource identifiers unique:
variable "instance_id" {
  type        = string
  description = "Uniquely identifies the instance to deploy into"
}

# ✓ CORRECT: Resource names include instance_id
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"
}

resource "aws_db_instance" "postgres" {
  identifier = "myapp-db-${var.instance_id}"
}

# ✓ CORRECT: Secret paths include instance_id
data "aws_secretsmanager_secret_version" "api_key" {
  secret_id = "${var.instance_id}/prod/api/key"
}

# ✗ INCORRECT: Hard-coded names will cause collisions
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data"  # Multiple deployments will conflict
}
What needs to be parameterized:
  • Resource names and identifiers (S3 buckets, databases, Lambda functions, etc.)
  • Secret paths in external secret stores
  • Log group names
  • IAM role and policy names
  • Any other globally unique identifiers
Without proper parameterization, attempting to deploy to multiple appliances will result in resource creation failures as Terraform tries to create duplicate resources.
Always test new origin stack versions in test appliances before releasing to customer appliances:
  1. Publish the new origin stack
  2. Create a release to a test appliance
  3. Deploy and validate the release against the test appliance
  4. Create a new release for customer appliances
  5. Deploy the release to customer appliances

Next steps

Now that you understand origin stacks, explore these related topics: