Docker containers can be used as origin stacks with Tensor9. A Docker container origin stack is simply a container image URI that Tensor9 compiles into a complete infrastructure stack for each appliance, across a range of customer environments.

What is a Docker container origin stack?

A Docker container origin stack is your existing Docker container image. Tensor9 takes your container image URI and automatically generates all the necessary infrastructure (container orchestration, networking, load balancers) to run it in customer environments - whether that’s AWS, Google Cloud, Azure, DigitalOcean, or a private Kubernetes cluster.

Tensor9 reads metadata directly from the container image: if the image includes EXPOSE directives, Tensor9 automatically configures load balancers for those ports. You simply bind your app to the container image URI, and Tensor9 uses this as the blueprint to generate a complete deployment stack for each customer appliance.
Your origin stack should be your existing Docker container. Tensor9 is designed to work with the container images you already have - you don’t need to rebuild your containers just for Tensor9. The goal is to maintain a single container image that works for both your SaaS deployment and private customer deployments.

How Docker container origin stacks work

1

Publish your container image

Your container image must be available in a container registry (Amazon ECR, Docker Hub, GitHub Container Registry, Google Artifact Registry, etc.). Tensor9 will reference this image when creating deployment stacks.
2

Create a release

When you want to deploy to an appliance, you create a release using tensor9 stack release create. During release creation, your control plane compiles your Docker container specification into a complete Terraform deployment stack tailored to the appliance’s cloud environment. The compilation generates different infrastructure based on the appliance’s form factor:
AWS:
  • ECS Fargate cluster, task definition, and service
  • VPC with subnets, internet gateway, and route tables
  • Network load balancer with listeners and target groups for each exposed port
  • Security groups with ingress rules for exposed ports
  • IAM roles with appropriate permissions
  • CloudWatch log groups for container logs
Google Cloud:
  • Cloud Run service or Compute Engine instance group
  • VPC network and firewall rules
  • Load balancer with backend services for each exposed port
  • Service accounts with appropriate permissions
  • Cloud Logging configuration
Azure:
  • Container Instances or Azure Kubernetes Service
  • Virtual network and network security groups
  • Load balancer with rules for each exposed port
  • Managed identities with appropriate permissions
  • Azure Monitor configuration
DigitalOcean:
  • Kubernetes Deployment and Service resources
  • Load balancer configuration for each exposed port
  • Network policies
Private Kubernetes:
  • Kubernetes Deployment with pod specifications
  • Kubernetes Service (LoadBalancer or NodePort) for each exposed port
  • Resource requests and limits
The compilation process:
  • Automatically creates appropriate load balancing resources for each exposed port
  • Configures routing to your container based on the cloud provider
  • Sets up security rules to allow inbound traffic on exposed ports
  • Maps container ports to load balancer ports
  • Configures the container orchestration system to run your container
The result is a deployment stack - a Terraform configuration that defines all the infrastructure needed to run your container in the target appliance’s environment. When deployed, the deployment stack copies your container image from its original registry into the appliance’s container registry.
3

Deploy the deployment stack

Download the compiled deployment stack and deploy it using Terraform or OpenTofu:
# Navigate to the deployment stack directory
cd my-test-appliance

# Initialize Terraform
tofu init

# Deploy the infrastructure
tofu apply
The Terraform deployment creates all the infrastructure resources (container orchestration, load balancer, networking, etc.) automatically and starts your container.
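If you prefer to review changes before they are applied, OpenTofu supports saved plans, same as Terraform; a minimal sketch:
# Optional: write the plan to a file and review it
tofu plan -out tfplan

# Apply exactly the reviewed plan
tofu apply tfplan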
4

Monitor deployment progress

Monitor the deployment using Terraform output and your cloud provider’s console:
# View deployment status
tensor9 report -customerName acme-corp

# View Terraform output
tofu output

# For AWS: Check ECS service
aws ecs describe-services --cluster <cluster-name> --services <service-name>

# For GCP: Check Cloud Run service
gcloud run services describe <service-name>

# For Kubernetes: Check deployment
kubectl get deployments
kubectl get pods
You maintain one container image. Tensor9 compiles it into many deployment stacks (one per appliance), each customized for that appliance’s cloud environment. Each deployment stack is a Terraform configuration that creates the appropriate infrastructure for the target cloud provider.

Prerequisites

Before using Docker as an origin stack, ensure you have:
  • Container image in a registry: Your image must be pushed to a container registry (the deployment stack will copy it to the appliance’s registry)
  • Tensor9 CLI installed: For creating releases
  • Tensor9 API key configured: Set as T9_API_KEY environment variable

Docker container origin stack format

A Docker container origin stack is simply your container image URI:
210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
Tensor9 reads the image metadata directly from your container image. If the image includes EXPOSE directives, Tensor9 automatically configures load balancers for those ports.
Publishing workflow: You bind your app to a container image URI (typically with the :latest tag). Then, each time you want to release a new version, you simply push a new container image to that same tag. Tensor9 will pull the latest image when you create a release. This means you don’t need to rebind your app every time you update your container - just push the new image and create a release.

Supported registries

Your container image can be in any container registry:
  • Amazon ECR: 123456789.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
  • Docker Hub: docker.io/library/nginx:latest
  • GitHub Container Registry: ghcr.io/myorg/app:latest
  • Google Artifact Registry: us-docker.pkg.dev/project/repo/image:latest
  • Azure Container Registry: myregistry.azurecr.io/app:latest
  • Private registries: Any OCI-compatible registry
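Most registries require authentication before you can push or pull. As a sketch, for Amazon ECR (the registry host and region below are illustrative):
# Authenticate Docker to Amazon ECR
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com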

Exposed ports (optional)

If your container image includes EXPOSE directives, Tensor9 will automatically configure load balancers for those ports. For example, if your Dockerfile contains:
FROM node:18-alpine
WORKDIR /app
COPY . .
EXPOSE 8080
EXPOSE 8443
CMD ["node", "server.js"]
Tensor9 will automatically:
  • Create a load balancer listener for each exposed port
  • Configure routing from the load balancer to your container
  • Set up security rules to allow inbound traffic on exposed ports
  • Map container ports to external load balancer ports
If your container has no EXPOSE directives, Tensor9 will deploy the container without a load balancer.
Currently, only TCP ports are supported. UDP and other protocols are not yet supported for Docker container origin stacks.

Publishing and deploying

Initial setup (one-time)

1

Build and push your container image

Build your origin Docker image and push it to a registry:
# Example: Push to Amazon ECR
docker build -t my-app:latest .
docker tag my-app:latest 210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
docker push 210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
2

Bind the container image to your app

Bind your app to the container image URI:
tensor9 stack bind \
  -appName my-app \
  -stackType DockerContainer \
  -nativeStackId "210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest"
This only needs to be done once per app.

Releasing new versions

Each time you want to release a new version:
1

Push your updated container image

# Build and push new version to the same tag
docker build -t my-app:latest .
docker tag my-app:latest 210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
docker push 210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
2

Create a release

tensor9 stack release create \
  -appName my-app \
  -testApplianceName my-test-appliance \
  -vendorVersion "1.0.0"
Your control plane compiles the Docker image URI into a complete Terraform deployment stack tailored to the appliance’s cloud environment.
3

Deploy to your test appliance

Download and deploy the compiled deployment stack:
# Navigate to the downloaded deployment stack directory
cd my-test-appliance

# Deploy with Terraform/OpenTofu
tofu init
tofu apply
4

Access your application

Once deployed, you can access your application through the load balancer endpoint. The exact method depends on the cloud provider:
# For AWS: Get the load balancer DNS
tofu output load_balancer_dns

# For GCP: Get the Cloud Run URL or load balancer IP
tofu output service_url

# For Azure: Get the load balancer IP
tofu output load_balancer_ip

# For Kubernetes: Get the service endpoint
kubectl get services
Your application will be accessible at the endpoint shown, on each exposed port.
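A quick way to confirm the endpoint is serving traffic is a curl smoke test (the hostname below is illustrative):
# Smoke test an exposed HTTP port
curl -i http://my-app-nlb-123456.elb.us-east-1.amazonaws.com:8080/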

Tuning container resources

You can customize the CPU and memory resources allocated to your container by providing a stack tuning document when creating a release. This allows you to adjust resources per deployment without modifying your origin stack.

Creating a stack tuning document

Create a JSON or YAML file that specifies the container resources:
{
  "version": "V1",
  "containerResources": {
    "cpu": "4",
    "memory": "8Gi"
  }
}
CPU format:
  • Whole numbers: "2" (2 CPUs)
  • Millicores: "2000m" (2000 millicores = 2 CPUs)
  • Fractional: "0.5" (half a CPU)
Memory format:
  • Gibibytes: "4Gi" (4 GiB)
  • Gigabytes: "4G" (4 GB)
  • Mebibytes: "4096Mi" (4096 MiB)
  • Megabytes: "4096M" (4096 MB)

Using the stack tuning document

Pass the stack tuning document when creating a release:
tensor9 stack release create \
  -appName my-app \
  -testApplianceName my-test-appliance \
  -vendorVersion "1.0.0" \
  -tuningDoc tuning.json
You can also use YAML format:
version: V1
containerResources:
  cpu: "4"
  memory: "8Gi"
tensor9 stack release create \
  -appName my-app \
  -testApplianceName my-test-appliance \
  -vendorVersion "1.0.0" \
  -tuningDoc tuning.yaml \
  -tuningDocFmt Yaml

When to use resource tuning

Resource tuning is useful when:
  • Different customer tiers: Allocate more resources for enterprise customers
  • Performance optimization: Increase resources for high-load deployments
  • Cost optimization: Reduce resources for development/testing environments
  • Workload requirements: Match resources to specific customer workload patterns
The stack tuning document overrides the default resource allocation for that specific release. You can use different stack tuning documents for different appliances, allowing you to customize resources per customer without changing your origin stack.
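For example, you might keep one tuning document per customer tier and pass the appropriate one at release time (the appliance names and file names below are hypothetical):
# Larger allocation for an enterprise appliance
tensor9 stack release create \
  -appName my-app \
  -testApplianceName enterprise-appliance \
  -vendorVersion "1.0.0" \
  -tuningDoc tuning-enterprise.json

# Smaller allocation for a dev/test appliance
tensor9 stack release create \
  -appName my-app \
  -testApplianceName dev-appliance \
  -vendorVersion "1.0.0" \
  -tuningDoc tuning-dev.json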

Managing secrets

Pass sensitive data to your container as environment variables using secrets defined in the tuning document. This allows you to reference secrets from AWS Secrets Manager or SSM Parameter Store.

Defining secrets in the tuning document

Add a secrets section to your tuning document:
{
  "version": "V1",
  "containerResources": {
    "cpu": "2",
    "memory": "4Gi"
  },
  "secrets": {
    "db_password": {
      "source": "aws_secretsmanager",
      "secretId": "${instance_id}/prod/db/password",
      "environmentVariable": "DB_PASSWORD"
    },
    "api_key": {
      "source": "aws_ssm_parameter",
      "parameter": "/${instance_id}/prod/api/key",
      "environmentVariable": "API_KEY"
    }
  }
}
When you create a release with this tuning document, Tensor9 will automatically fetch the secrets and inject them as environment variables into your container.
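The referenced secrets must already exist in the appliance’s environment. As a sketch, creating them with the AWS CLI might look like this (the names are illustrative; the ${instance_id} placeholder resolves to the appliance’s actual instance ID):
# Create the Secrets Manager secret referenced by db_password
aws secretsmanager create-secret \
  --name "my-instance/prod/db/password" \
  --secret-string "example-password"

# Create the SSM parameter referenced by api_key
aws ssm put-parameter \
  --name "/my-instance/prod/api/key" \
  --type SecureString \
  --value "example-key"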

Accessing secrets in your application

Your application reads secrets from environment variables:
import os

# Read secrets from environment variables
db_password = os.environ['DB_PASSWORD']
api_key = os.environ['API_KEY']
Pass secrets as environment variables rather than using runtime SDK calls. While boto3.client('secretsmanager').get_secret_value() works natively in AWS appliances, using environment variables ensures your application works consistently across all deployment targets (AWS, Google Cloud, DigitalOcean).

Exposing ports

Tensor9 detects which ports your container exposes by reading the EXPOSE directives in your Dockerfile. When ports are detected, Tensor9 automatically provisions cloud-native load balancers to route traffic to your container.

Defining exposed ports in your Dockerfile

Use the EXPOSE directive in your Dockerfile to declare which ports your application listens on:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install

# Declare exposed ports
EXPOSE 8080
EXPOSE 8443

CMD ["node", "server.js"]
When you bind your app to this container image, Tensor9 reads the image metadata and automatically detects that ports 8080 and 8443 need to be exposed.
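You can verify this metadata yourself with docker inspect; the ExposedPorts field is what Tensor9 reads (the image name below is illustrative):
# Show the exposed-port metadata recorded in the image
docker inspect --format '{{json .Config.ExposedPorts}}' my-app:latest

# Expected output for the Dockerfile above:
# {"8080/tcp":{},"8443/tcp":{}}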

How Tensor9 provisions load balancers

When Tensor9 detects EXPOSE directives in your container image, it creates infrastructure appropriate for each cloud provider:
Kubernetes-based (DigitalOcean, Private Kubernetes):
  • Container orchestration: Kubernetes Deployment
  • Load balancer: Kubernetes LoadBalancer Service
  • Configuration: Service with a port for each exposed port; the cloud provider automatically provisions the load balancer
AWS:
  • Container orchestration: ECS Fargate Service
  • Load balancer: Network Load Balancer (NLB)
  • Configuration: Target groups and listeners for each exposed port
Google Cloud:
  • Container orchestration: Cloud Run service or Compute Engine
  • Load balancer: Cloud Load Balancing or Cloud Run
  • Configuration: Backend services for each exposed port (TCP) or automatic HTTPS routing (Cloud Run)
Azure:
  • Container orchestration: Container Instances or AKS
  • Load balancer: Azure Load Balancer
  • Configuration: Backend pools and rules for each exposed port

Accessing your exposed ports

After deployment, retrieve the public endpoint from the cloud provider.

For Kubernetes-based environments:
kubectl get service my-app-service

NAME              TYPE           EXTERNAL-IP                                      PORT(S)
my-app-service    LoadBalancer   abc123-lb.us-east-1.elb.amazonaws.com           8080:31234/TCP,8443:31235/TCP
Access your application at http://<EXTERNAL-IP>:8080 and https://<EXTERNAL-IP>:8443.

For AWS ECS:
aws elbv2 describe-load-balancers --names my-app-nlb

# Returns NLB DNS name: my-app-nlb-123456.elb.us-east-1.amazonaws.com
Access your application at http://my-app-nlb-123456.elb.us-east-1.amazonaws.com:8080.

For Google Cloud Run:
gcloud run services describe my-app --region us-central1

# Returns service URL: https://my-app-abc123-uc.a.run.app
Cloud Run automatically handles HTTPS and routes to your application.

Multiple ports

Expose multiple ports for different purposes by adding multiple EXPOSE directives:
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt

# Expose multiple ports for different purposes
# HTTP API
EXPOSE 8080
# HTTPS API
EXPOSE 8443
# Prometheus metrics endpoint
EXPOSE 9090

CMD ["python", "app.py"]
Each EXPOSE directive creates a corresponding listener/target group on the load balancer.
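On AWS, you can confirm the per-port listeners by querying the NLB (the load balancer name below is illustrative):
# Find the NLB's ARN
aws elbv2 describe-load-balancers --names my-app-nlb \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text

# List the listener ports; expect one per EXPOSE directive
aws elbv2 describe-listeners \
  --load-balancer-arn <load-balancer-arn> \
  --query 'Listeners[].Port'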

Protocol support

Supported:
  • Standard TCP ports (HTTP, HTTPS, custom TCP services)
  • Multiple ports per container
  • Ports in any range (1-65535)
Not supported:
  • UDP protocols (only TCP is supported)
  • Port range specifications in EXPOSE (e.g., EXPOSE 8000-8010)
  • SCTP or other non-TCP protocols
If you need UDP or advanced networking, use a Terraform origin stack with custom Kubernetes manifests.

Best practices for ports

Use standard ports for common protocols:
# Good: Standard HTTP/HTTPS ports
EXPOSE 80
EXPOSE 443
Minimize exposed ports:
  • Each exposed port may incur load balancer costs
  • Only expose ports that need external access from outside the appliance
  • Use a single port with path-based routing when possible
Document what each port does: add a comment line above each EXPOSE directive (Dockerfile does not support inline comments after an instruction):
# HTTP API - main application endpoint
EXPOSE 8080
# HTTPS API - secure application endpoint
EXPOSE 8443
# Prometheus metrics
EXPOSE 9090
# Health check endpoint
EXPOSE 9091
Ensure your application listens on all interfaces: Your application must bind to 0.0.0.0 (all network interfaces), not localhost or 127.0.0.1:
# ✓ CORRECT: Listen on all interfaces
app.run(host='0.0.0.0', port=8080)

# ✗ INCORRECT: Only accessible from within the container
app.run(host='localhost', port=8080)
Consider using an API gateway:
  • For complex routing needs
  • To consolidate multiple services behind a single endpoint
  • To reduce load balancer costs

Internal-only containers

If your container doesn’t need external access (e.g., background workers, queue processors), don’t include any EXPOSE directives in your Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt

# No EXPOSE directives = no load balancer provisioned

CMD ["python", "worker.py"]
The container will run but won’t have a public endpoint, reducing infrastructure costs.

Generated infrastructure by form factor

When Tensor9 compiles your Docker container origin stack, it generates infrastructure appropriate for the target cloud provider:
Container Orchestration:
  • AWS: ECS cluster; ECS task definition with resource limits; ECS service (1 replica)
  • Google Cloud: Cloud Run service with automatic scaling; configured resource limits
  • Azure: Container Instances or AKS cluster; configured resource limits
  • DigitalOcean: Kubernetes Deployment; resource requests and limits; 1 replica
  • Private Kubernetes: Kubernetes Deployment; resource requests and limits; 1 replica
Networking:
  • AWS: VPC with public subnets; internet gateway and route tables; security groups for exposed ports
  • Google Cloud: VPC network with subnet; firewall rules for exposed ports; Cloud NAT for outbound connectivity
  • Azure: virtual network and subnet; network security groups for exposed ports
  • DigitalOcean: Kubernetes Service (LoadBalancer); network policies
  • Private Kubernetes: Kubernetes Service (LoadBalancer or NodePort); service ports for exposed ports
Load Balancing:
  • AWS: Network Load Balancer; target groups per exposed port; listeners per exposed port
  • Google Cloud: Cloud Load Balancer (HTTP(S) or TCP); backend services per exposed port; URL maps and forwarding rules
  • Azure: Azure Load Balancer or Application Gateway; backend pools per exposed port; rules per exposed port
  • DigitalOcean: DigitalOcean Load Balancer (auto-provisioned); service ports per exposed port
  • Private Kubernetes: service endpoint (LoadBalancer or NodePort)
Security & Logging:
  • AWS: IAM task execution role and task role; CloudWatch log group (90-day retention)
  • Google Cloud: service account with permissions; Cloud Logging configuration
  • Azure: managed identity with permissions; Azure Monitor and Log Analytics workspace
  • DigitalOcean: standard Kubernetes RBAC; container logs
  • Private Kubernetes: standard Kubernetes RBAC; container logs
Container Registry:
  • AWS: Amazon ECR in the appliance’s AWS account
  • Google Cloud: Google Artifact Registry in the appliance’s GCP project
  • Azure: Azure Container Registry in the appliance’s Azure subscription
  • DigitalOcean: DigitalOcean Container Registry in the appliance’s DO account
  • Private Kubernetes: appliance-local container registry (bundled with the appliance)

Best practices

For Docker container origin stacks, use the :latest tag (or another consistent tag) and push updates to the same tag. This allows you to release new versions by simply pushing a new image and creating a release, without needing to rebind your app. Tensor9’s vendorVersion field in releases provides version tracking.
Docker container origin stacks support a single container per deployment. If your application requires multiple containers (sidecars, service meshes, separate frontend/backend, caching layers, etc.), use a Terraform origin stack instead. Terraform allows you to define complex multi-container architectures using Kubernetes Deployments or ECS task definitions with multiple container specifications.
While Docker container origin stacks provide simple resource tuning via stack tuning documents, use a Terraform origin stack if you need advanced configuration options like auto-scaling policies, custom health check intervals, placement constraints, capacity providers, or fine-grained networking controls. Terraform gives you full control over infrastructure parameters that Docker container origin stacks don’t expose.

Limitations and considerations

Currently, only TCP ports are supported for exposed ports. UDP, SCTP, and other protocols are not yet supported. If your application requires non-TCP protocols, consider using Kubernetes resources embedded in Terraform as your origin stack.
Docker container origin stacks deploy with default resource limits, but you can customize CPU and memory using a stack tuning document (see “Tuning container resources” section above). For more complex resource configurations or different resource types (GPU, ephemeral storage), use Terraform or CloudFormation to define your container infrastructure directly.
Docker container origin stacks support a single container. If you need multi-container deployments (sidecars, service meshes, init containers), embed Kubernetes Deployment resources in a Terraform origin stack instead.

Troubleshooting

Symptom: Container orchestration system shows unhealthy or continuously restarting containers.
Cause: Container image not found, incorrect exposed ports, or application crashes on startup.
Solution:
  • Verify the container image exists in the registry
  • Check that exposed ports match what your application listens on
  • View container logs: tensor9 report -customerName acme-corp
  • Test the container locally: docker run -p 8080:8080 your-image (see the sketch after this list)
  • For Kubernetes: Use kubectl logs and kubectl describe pod to diagnose
  • For cloud services: Check the cloud provider’s console for detailed error messages
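A minimal sketch of that local repro (your-image is the placeholder from the list above):
# Run the image locally and tail its logs
docker run -d --name t9-debug -p 8080:8080 your-image
docker logs -f t9-debug

# In another shell, check that the app answers on the exposed port
curl -i http://localhost:8080/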
Symptom: Load balancer endpoint resolves but connection times out or is refused.
Cause: Security rules not configured correctly, or application not listening on the right port.
Solution:
  • Verify the EXPOSE directives in your Dockerfile match the ports your application listens on
  • Check security group/firewall rules allow inbound traffic on exposed ports
  • Confirm your application binds to 0.0.0.0 (all interfaces) not localhost or 127.0.0.1
  • Check health check status in the cloud provider’s console
  • For Kubernetes: Use kubectl port-forward to test direct connectivity to the pod
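For the Kubernetes case, a minimal sketch of testing the pod directly, bypassing the load balancer (the pod name is illustrative):
# Find the pod name
kubectl get pods

# Forward a local port to the pod's exposed port
kubectl port-forward pod/my-app-pod-abc123 8080:8080

# In another shell, test connectivity
curl -i http://localhost:8080/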
Symptom: Container orchestration fails with image pull errors.
Cause: Image doesn’t exist, registry is unreachable, or authentication issues.
Solution:
  • Verify the image exists: docker pull your-image (see the sketch after this list)
  • Ensure the image is in a publicly accessible registry or properly authenticated
  • Check that the registry is accessible from the appliance’s cloud environment
  • For private registries, verify that registry credentials are configured correctly
  • Review the cloud provider’s logging for detailed error messages about the pull failure
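A quick existence check that doesn’t download the image layers (the URI is the example from earlier; requires registry authentication):
# Inspect the remote manifest to confirm the image and tag exist
docker manifest inspect 210620017265.dkr.ecr.us-west-2.amazonaws.com/my-app:latest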

Related documentation

  • Terraform: For custom container infrastructure or multi-container deployments
  • Kubernetes: For embedding Kubernetes resources in Terraform
  • Deployments: How to create releases and deploy
  • Form Factors: Understand different cloud environments