Kubernetes resources can be used with Tensor9 by embedding them within Terraform or CloudFormation origin stacks. Unlike other infrastructure-as-code formats that Tensor9 supports, Kubernetes manifests cannot be used as standalone origin stacks - they must be embedded within a parent origin stack.

What is a Kubernetes origin stack?

A Kubernetes origin stack consists of Kubernetes resources (Deployments, Services, ConfigMaps, etc.) defined within a Terraform or CloudFormation origin stack using the respective provider’s Kubernetes resources. Key characteristic: Kubernetes resources are always embedded within another origin stack format. The parent origin stack (Terraform or CloudFormation) serves as the container, while Kubernetes manifests define the workload orchestration.
Your Kubernetes resources should be part of your existing Terraform or CloudFormation configuration. Tensor9 is designed to work with the infrastructure-as-code you already have - you don’t need to rewrite your Kubernetes deployments just for Tensor9. The goal is to maintain a single stack that works for both your SaaS deployment and private customer deployments.

Why embed Kubernetes?

Kubernetes manifests define how your application runs (pods, deployments, services), but they don’t provision the underlying infrastructure (clusters, networks, load balancers). By embedding Kubernetes within Terraform or CloudFormation, you get:
  • Complete infrastructure: The parent stack provisions the cluster (EKS, GKE, AKS) and supporting infrastructure
  • Unified deployment: One release process deploys both infrastructure and workloads
  • Automatic artifact handling: Tensor9 automatically copies container images to customer environments
  • Form factor adaptation: Kubernetes workloads adapt to different cloud environments seamlessly

How Kubernetes origin stacks work

1. Define Kubernetes resources in Terraform/CloudFormation

In your Terraform or CloudFormation origin stack, use the Kubernetes provider to define your Kubernetes resources. For Terraform, this typically means using kubernetes_manifest or kubernetes_deployment resources. Terraform example:
resource "kubernetes_manifest" "my_app_deployment" {
  manifest = {
    apiVersion = "apps/v1"
    kind       = "Deployment"
    metadata = {
      name      = "my-app"
      namespace = "default"
    }
    spec = {
      replicas = 3
      selector = {
        matchLabels = {
          app = "my-app"
        }
      }
      template = {
        metadata = {
          labels = {
            app = "my-app"
          }
        }
        spec = {
          containers = [
            {
              name  = "my-app"
              image = "myregistry.io/my-app:v1.0.0"
              ports = [
                { containerPort = 8080 }
              ]
            }
          ]
        }
      }
    }
  }
}
2. Publish and create a release

Publish your parent origin stack (Terraform or CloudFormation) to your control plane. When you create a release for an appliance, your control plane:
  1. Finds Kubernetes resources: Scans the origin stack for kubernetes_manifest, kubernetes_deployment, and other Kubernetes provider resources
  2. Extracts container images: Identifies all container image references in Kubernetes specs
  3. Prepares image copying: Configures the deployment stack to copy images to the appliance’s container registry (specific to the cloud provider)
  4. Rewrites image references: Updates container image fields to point to the locally-copied images
  5. Compiles the deployment stack: Generates a ready-to-deploy stack with all Kubernetes resources intact
The result is a deployment stack that includes both your infrastructure and Kubernetes workloads. When deployed, the deployment stack will copy the container images into the customer’s appliance.
3. Deploy to the appliance

Deploy the compiled deployment stack using the parent stack’s tooling. For Terraform deployment stacks:
cd acme-corp-production
tofu init
tofu apply
For CloudFormation deployment stacks: Your control plane automatically creates the CloudFormation stack in its AWS account. Monitor the deployment using:
tensor9 report -customerName acme-corp
During deployment, the Kubernetes resources are applied to the customer’s cluster with all container images pointing to the locally-copied versions.
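Once the apply completes, you can sanity-check the result with kubectl (assuming you have access to the appliance’s cluster; the resource names match the example above):
# Confirm the Deployment rolled out
kubectl get deployments -n default

# Confirm the pods reference the locally-copied images
kubectl get pods -n default -o jsonpath='{.items[*].spec.containers[*].image}'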

Supported Kubernetes resources

Tensor9 automatically handles artifact copying for these Kubernetes resource types:

kubernetes_manifest (Terraform)

The kubernetes_manifest resource accepts any Kubernetes manifest as a map. Tensor9 automatically detects and processes:
  • Deployments (kind: "Deployment"): Extracts container images from spec.template.spec.containers[].image
  • Other workload types: Support for additional resource types is expanding
Example:
resource "kubernetes_manifest" "nginx" {
  manifest = {
    apiVersion = "apps/v1"
    kind       = "Deployment"
    metadata = {
      name = "nginx"
    }
    spec = {
      template = {
        spec = {
          containers = [
            {
              name  = "nginx"
              image = "docker.io/nginx:1.21"  # Automatically copied
            }
          ]
        }
      }
    }
  }
}

kubernetes_deployment (Terraform)

The kubernetes_deployment resource provides typed Kubernetes Deployment support. Tensor9 extracts container images from the deployment spec. Example:
resource "kubernetes_deployment" "app" {
  metadata {
    name = "my-app"
  }
  spec {
    template {
      spec {
        container {
          name  = "app"
          image = "ghcr.io/myorg/app:v1.0.0"  # Automatically copied
        }
      }
    }
  }
}

Other Kubernetes provider resources

Tensor9 supports other Kubernetes provider resources (Services, ConfigMaps, Secrets, etc.). These resources pass through compilation unchanged, as they typically don’t reference external artifacts.
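For example, a Service that fronts the Deployment from step 1 compiles through unchanged, since it selects pods by label and references no container images (a minimal sketch):
resource "kubernetes_service" "my_app" {
  metadata {
    name      = "my-app"
    namespace = "default"
  }

  spec {
    selector = {
      app = "my-app"
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}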

Helm charts

Helm charts can be deployed using the Terraform Helm provider. Helm is a package manager for Kubernetes that bundles multiple Kubernetes resources into a single deployable unit called a “chart.”
1. Add the Helm provider to your Terraform configuration

Include the Helm provider in your required_providers block:
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 3.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.38"
    }
  }
}
2. Configure the provider to connect to your Kubernetes cluster

Configure the Helm provider to use your cluster’s endpoint and credentials:
provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.main.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.main.token
  }
}
3. Deploy charts using helm_release resources

Define helm_release resources to deploy Helm charts:
resource "helm_release" "nginx_ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "default"
  version    = "4.10.1"

  # Helm provider 3.x expects set as a list of objects
  set = [
    {
      name  = "controller.service.type"
      value = "LoadBalancer"
    }
  ]

  depends_on = [aws_eks_cluster.main]
}

How Helm charts are compiled

1. Helm resources pass through

The helm_release resource is included in the deployment stack unchanged.
2. Charts deploy at runtime

When you run tofu apply on the deployment stack, Terraform installs the Helm chart into the cluster.
3. Container images in charts

Container images referenced within Helm charts are pulled from their original registries at deployment time.
Currently, Tensor9 does not automatically copy container images referenced within Helm charts to the appliance’s container registry. This means Helm charts must be able to pull images from their original registries (e.g., Docker Hub, GitHub Container Registry). If your appliance cannot reach external registries, consider using Kubernetes manifests directly instead of Helm charts, or pre-pull images and reference them from a local registry.
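If the appliance can only reach an internal registry, many charts let you redirect their images through chart values. The sketch below is illustrative: the chart, repository URL, and value names (image.repository, image.tag) are hypothetical conventions, so check your chart’s values.yaml for the actual keys:
resource "helm_release" "my_chart" {
  name       = "my-chart"
  repository = "https://charts.example.com"  # hypothetical chart repository
  chart      = "my-chart"
  version    = "1.2.3"

  # Redirect the chart's images to a registry the appliance can reach
  set = [
    {
      name  = "image.repository"
      value = "myregistry.internal/my-chart"
    },
    {
      name  = "image.tag"
      value = "1.2.3"
    }
  ]
}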

Best practices for Helm charts

  • Pin chart versions: Always specify a version in your helm_release to ensure consistent deployments
  • Use accessible registries: Ensure Helm charts reference images from registries the appliance can reach
  • Test chart deployments: Validate Helm charts in test appliances before deploying to customer appliances
  • Configure chart values: Use the set or values arguments to customize chart behavior for each environment (see the sketch below)
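As an alternative to individual set entries, the values argument accepts YAML strings, and yamlencode keeps them structured. A sketch reusing the ingress-nginx release from above (replicaCount is an illustrative chart value):
resource "helm_release" "nginx_ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "default"
  version    = "4.10.1"

  # Equivalent to set entries, expressed as structured YAML
  values = [
    yamlencode({
      controller = {
        service = {
          type = "LoadBalancer"
        }
        replicaCount = 2
      }
    })
  ]

  depends_on = [aws_eks_cluster.main]
}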

Container image handling

Tensor9 automatically handles container images referenced in Kubernetes resources:

Automatic image copying

When Tensor9 finds a container image in a Kubernetes resource, it:
  1. Validates the image reference: Ensures the image has a registry (e.g., docker.io, ghcr.io, myregistry.io)
  2. Configures image copying: Prepares the deployment stack to copy the image to the appliance’s container registry
  3. Rewrites the reference: Updates the Kubernetes manifest to point to the locally-copied image
Before compilation (origin stack):
spec = {
  containers = [
    {
      image = "docker.io/nginx:1.21"
    }
  ]
}
After compilation (deployment stack):
spec = {
  containers = [
    {
      # Now points to the customer's ECR or appliance registry
      image = "123456789.dkr.ecr.us-west-2.amazonaws.com/t9-app-images:nginx-1.21-abc123"
    }
  ]
}

Image registry requirements

For Tensor9 to copy a container image, it must include a registry in the image reference:
  • ✅ Supported: docker.io/nginx:latest, ghcr.io/myorg/app:v1.0, myregistry.io/image:tag
  • ⚠️ Skipped: nginx:latest (no registry, so the image is assumed to be publicly pullable from within the appliance)
Images without a registry are assumed to be publicly available from Docker Hub and are not copied. If your appliance cannot reach Docker Hub, make sure to include the registry prefix: docker.io/nginx:latest instead of nginx:latest.

Where images are stored

Copied container images are stored in the appliance’s container registry. The storage location is determined by the appliance’s form factor and is handled automatically by Tensor9:
Appliance Environment    Container Registry
AWS                      Amazon ECR
Google Cloud             Google Artifact Registry
Azure                    Azure Container Registry
DigitalOcean             DigitalOcean Container Registry
Private Kubernetes       Appliance-local container registry

Prerequisites

Before using Kubernetes resources in your origin stack:

For Terraform origin stacks

  • Kubernetes provider configured: Include the Kubernetes Terraform provider in your required_providers
  • Cluster access configured: The Kubernetes provider must be configured to connect to your cluster (typically via EKS, GKE, or AKS data sources)
  • Valid Kubernetes manifests: Your Kubernetes resources must be valid according to the Kubernetes API
Example Terraform configuration:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.38"
    }
  }
}

# Configure Kubernetes provider using EKS cluster
data "aws_eks_cluster" "cluster" {
  name = aws_eks_cluster.main.name
}

data "aws_eks_cluster_auth" "cluster" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

For CloudFormation origin stacks

Support for Kubernetes resources in CloudFormation origin stacks is under development. Currently, Terraform is the recommended approach for embedding Kubernetes resources.
If you have a CloudFormation + Kubernetes use case, please reach out to [email protected] to discuss your requirements.

Example: Complete Terraform + Kubernetes origin stack

This example shows a complete Terraform origin stack that provisions an EKS cluster and deploys a Kubernetes application:
# Configure providers
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.38"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

# Create EKS cluster and supporting infrastructure
# (VPC, subnets, IAM roles, etc. - see AWS documentation)
resource "aws_eks_cluster" "main" {
  name     = "my-app-cluster"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = [aws_subnet.private_1.id, aws_subnet.private_2.id]
  }
}

resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "main-nodes"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = [aws_subnet.private_1.id, aws_subnet.private_2.id]

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }
}

# Configure Kubernetes provider
data "aws_eks_cluster" "main" {
  name = aws_eks_cluster.main.name
}

data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}

# Deploy application to Kubernetes
resource "kubernetes_manifest" "app_deployment" {
  manifest = {
    apiVersion = "apps/v1"
    kind       = "Deployment"
    metadata = {
      name      = "my-app"
      namespace = "default"
    }
    spec = {
      replicas = 3
      selector = {
        matchLabels = {
          app = "my-app"
        }
      }
      template = {
        metadata = {
          labels = {
            app = "my-app"
          }
        }
        spec = {
          containers = [
            {
              name  = "app"
              # This image will be automatically copied to the customer's ECR
              image = "ghcr.io/myorg/my-app:v1.0.0"
              ports = [
                { containerPort = 8080 }
              ]
              env = [
                {
                  name  = "DATABASE_URL"
                  value = "postgresql://..."
                }
              ]
            }
          ]
        }
      }
    }
  }
}

resource "kubernetes_manifest" "app_service" {
  manifest = {
    apiVersion = "v1"
    kind       = "Service"
    metadata = {
      name      = "my-app"
      namespace = "default"
    }
    spec = {
      type = "LoadBalancer"
      selector = {
        app = "my-app"
      }
      ports = [
        {
          port       = 80
          targetPort = 8080
        }
      ]
    }
  }
}

Publishing and deploying

Publishing and deploying a Kubernetes origin stack follows the same workflow as any Terraform origin stack:

1. Publish your origin stack

tensor9 stack publish \
  -stackType TerraformWorkspace \
  -stackS3Key my-kubernetes-app \
  -dir /path/to/terraform
This uploads your entire Terraform workspace (including Kubernetes resources) to your control plane.

2. Bind the stack to your app

tensor9 stack bind \
  -appName my-app \
  -nativeStackId "s3://bucket/my-kubernetes-app.tf.tgz"
Only needed once per app.

3. Create a release

tensor9 stack release create \
  -appName my-app \
  -testApplianceName my-test-appliance \
  -vendorVersion "1.0.0"
Your control plane compiles the origin stack, automatically copying container images and rewriting references.

4. Deploy the deployment stack

cd my-test-appliance
tofu init
tofu apply
This creates the EKS cluster, node groups, and deploys your Kubernetes workloads. The deployment stack will copy container images into the customer’s appliance.

Best practices

Include the full registry in your container image references (docker.io/nginx:latest instead of nginx:latest). This ensures Tensor9 can copy the images to customer environments.
Avoid using :latest tags in production. Use specific version tags (v1.0.0, sha-abc123) to ensure consistent deployments across customer appliances.
When configuring the Kubernetes provider to connect to your cluster, use data sources (like aws_eks_cluster) rather than hardcoded values. This ensures the provider configuration adapts to each appliance’s cluster.
Store secrets in AWS Secrets Manager in your origin stack, then pass them to your Kubernetes pods as environment variables. Avoid using Kubernetes Secrets for sensitive data, as Tensor9 does not automatically map runtime SDK calls to fetch secrets across different cloud environments.
# Define secret in AWS Secrets Manager
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"
}

# Read the current secret value (note: it passes through Terraform state,
# so restrict access to state accordingly)
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = aws_secretsmanager_secret.db_password.id
}

# Pass the value to the Kubernetes Deployment as an environment variable
resource "kubernetes_deployment" "app" {
  metadata {
    name = "my-app"
  }
  spec {
    selector {
      match_labels = {
        app = "my-app"
      }
    }
    template {
      metadata {
        labels = {
          app = "my-app"
        }
      }
      spec {
        container {
          name  = "app"
          image = "ghcr.io/myorg/app:v1.0.0"
          env {
            name  = "DB_PASSWORD"
            value = data.aws_secretsmanager_secret_version.db_password.secret_string
          }
        }
      }
    }
  }
}
For non-sensitive configuration, Kubernetes ConfigMaps are appropriate.
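For example, a ConfigMap defined alongside the Deployment can carry those settings (a minimal sketch; the names and keys are illustrative):
resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "my-app-config"
    namespace = "default"
  }

  data = {
    LOG_LEVEL = "info"
    FEATURE_X = "enabled"
  }
}

# Expose every key as an environment variable from the container block:
#   env_from {
#     config_map_ref {
#       name = kubernetes_config_map.app_config.metadata[0].name
#     }
#   }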
Always specify resource requests and limits for containers. This ensures proper scheduling and prevents resource contention in customer clusters.
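For example, this resources block slots into the container block of the Deployment above (the sizes are placeholders to tune per workload):
container {
  name  = "app"
  image = "ghcr.io/myorg/app:v1.0.0"

  resources {
    requests = {
      cpu    = "250m"   # capacity the scheduler reserves for the pod
      memory = "256Mi"
    }
    limits = {
      cpu    = "500m"   # hard ceiling enforced at runtime
      memory = "512Mi"
    }
  }
}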

Limitations and considerations

Kubernetes YAML files cannot be used as standalone origin stacks. They must be embedded within Terraform or CloudFormation. This is by design - Kubernetes defines workloads, not the underlying infrastructure.
The parent origin stack (Terraform/CloudFormation) must provision or reference the Kubernetes cluster before defining Kubernetes resources. Use proper dependency management (depends_on in Terraform) to ensure correct ordering.
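For example, in Terraform you can make workloads wait for the node group, so manifests aren’t applied to a cluster with no schedulable nodes (a sketch reusing names from the complete example above):
resource "kubernetes_manifest" "app_deployment" {
  # Wait for worker nodes before applying the workload
  depends_on = [aws_eks_node_group.main]

  manifest = {
    # ... Deployment manifest as shown earlier ...
  }
}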
Ensure your Kubernetes provider version is compatible with your target cluster version. Different Kubernetes versions may have different API schemas for resources.
If your container images require authentication to pull, you’ll need to configure image pull secrets in your Kubernetes manifests. Tensor9 copies the images but doesn’t automatically create pull secrets.
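A minimal sketch of that wiring, assuming hypothetical var.registry_username and var.registry_password variables: create a dockerconfigjson Secret, then reference it from the pod spec.
resource "kubernetes_secret" "regcred" {
  metadata {
    name      = "regcred"
    namespace = "default"
  }

  type = "kubernetes.io/dockerconfigjson"

  # The provider base64-encodes values in data automatically
  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "myregistry.io" = {  # hypothetical private registry
          username = var.registry_username
          password = var.registry_password
          auth     = base64encode("${var.registry_username}:${var.registry_password}")
        }
      }
    })
  }
}

# Then, inside the pod spec of your kubernetes_manifest:
#   spec = {
#     imagePullSecrets = [{ name = "regcred" }]
#     containers       = [ ... ]
#   }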

Troubleshooting

Symptom: Deployment fails because the container image cannot be pulled from the original registry.
Cause: The image reference doesn’t include a registry, or Tensor9 couldn’t detect the image reference.
Solution:
  • Ensure your image reference includes the full registry: docker.io/nginx:latest
  • Check that the image is referenced in a supported field (spec.containers[].image)
  • Review compilation logs for warnings about skipped images
Symptom: Terraform apply fails with an “unable to connect to Kubernetes cluster” error.
Cause: The Kubernetes provider is not correctly configured to connect to the cluster.
Solution:
  • Verify the cluster exists before applying Kubernetes resources
  • Use data sources to dynamically configure the provider
  • Check that IAM roles/permissions allow cluster access
Symptom: The Kubernetes Deployment is created but pods fail to start with ImagePullBackOff.
Cause: Pods cannot pull the container image from the appliance registry.
Solution:
  • Verify that the deployment stack shows rewritten image references
  • Check that the node group has permissions to pull from ECR (for AWS)
  • Ensure the image was successfully copied during compilation
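To narrow down which of these failed, inspect the compiled stack and the cluster directly (assuming shell access to the deployment stack directory and kubectl access to the cluster; my-test-appliance and my-app are the names from the examples above):
# What image references does the compiled deployment stack contain?
grep -rn "image" ./my-test-appliance/

# What is the cluster actually trying to pull?
kubectl get deployment my-app -n default \
  -o jsonpath='{.spec.template.spec.containers[*].image}'

# Why is the pull failing?
kubectl describe pod -n default -l app=my-app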