On-premises and bare metal deployments allow your customers to run your application on physical servers in their own data centers or co-location facilities. These environments provide maximum control over hardware, networking, and security, making them ideal for customers with strict data residency, compliance, or performance requirements.

Overview

On-premises and bare metal deployments use Kubernetes as the underlying orchestration platform. Your customer installs and manages their own Kubernetes cluster on their infrastructure, and then Tensor9 deploys your application into that cluster following the same pattern as Private Kubernetes deployments. This approach gives customers full control over:
  • Hardware: Physical servers, storage, and networking equipment
  • Infrastructure: Data center location, network topology, and security perimeter
  • Kubernetes distribution: Choice of Kubernetes implementation and version
  • Operational practices: Backup, disaster recovery, and maintenance schedules

Prerequisites

Before deploying to on-premises or bare metal environments, your customer and your team need the following in place:

Customer’s infrastructure

  • Kubernetes cluster: A working Kubernetes cluster installed and configured on their infrastructure (see Kubernetes distributions below)
  • Two namespaces: One for the Tensor9 controller, one for your application
  • Network connectivity: Outbound HTTPS access for the Tensor9 controller to communicate with your control plane
  • Container registry: Access to a container registry (public or private) for pulling container images
  • Storage: Persistent storage solution compatible with Kubernetes (local storage, NFS, SAN, etc.)
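The outbound-HTTPS prerequisite can be sanity-checked before anything is installed. The endpoint below is a placeholder; substitute the control plane URL from your Tensor9 onboarding materials.

```shell
# Placeholder endpoint; replace with your actual Tensor9 control plane URL.
ENDPOINT=https://controlplane.tensor9.example
if command -v curl >/dev/null 2>&1 && curl -s --max-time 5 -o /dev/null "$ENDPOINT"; then
  echo "outbound HTTPS to $ENDPOINT: OK"
else
  echo "outbound HTTPS to $ENDPOINT: blocked, unreachable, or curl missing"
fi
```

Run this from a node inside the customer's network perimeter, since that is where the Tensor9 controller will run.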

Your development environment

  • kubectl installed and configured
  • Helm installed (required for customer controller installation)
  • Terraform or OpenTofu (if using Terraform origin stacks with Kubernetes provider)
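A quick way to confirm the tooling above is present on your workstation; a minimal sketch, so add any other tools your own workflow needs:

```shell
# Check that each required CLI tool is on the PATH.
for tool in kubectl helm terraform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```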

Kubernetes distributions

Customers can choose from various Kubernetes distributions for their on-premises or bare metal infrastructure:
| Distribution | Best For | Notes |
| --- | --- | --- |
| Vanilla Kubernetes | Maximum flexibility and control | Requires manual setup and management |
| K3s | Edge computing, resource-constrained environments | Lightweight, single binary, simplified architecture |
| MicroK8s | Developer workstations, small clusters | Ubuntu-optimized, snap-based installation |
| RKE/RKE2 | Rancher users, enterprise environments | Integrated with Rancher management platform |
| OpenShift | Red Hat environments, enterprise support | Enterprise Kubernetes with additional tooling |
| Tanzu Kubernetes Grid | VMware environments | VMware's enterprise Kubernetes platform |

The customer is responsible for installing, configuring, and maintaining their chosen Kubernetes distribution.

Service equivalents

When you deploy an origin stack to on-premises environments, Tensor9 automatically compiles resources from your AWS origin stack to their Kubernetes equivalents. Since on-premises deployments use Kubernetes as the underlying platform, the service equivalents are identical to Private Kubernetes environments.

Common service equivalents

| Service Category | AWS | On-Prem / Bare Metal Equivalent |
| --- | --- | --- |
| Containers | EKS, ECS | Kubernetes |
| Functions | Lambda | Knative (unmanaged) |
| Networking | VPC | - |
| Load balancing | Load Balancer | Cloudflare (optional) |
| DNS | Route 53 | Cloudflare (optional) |
| Identity and access management | IAM | - |
| Object storage | S3 | Backblaze B2, MinIO (unmanaged) |
| Databases (PostgreSQL) | RDS Aurora PostgreSQL, RDS PostgreSQL | Neon, CloudNative PostgreSQL (unmanaged) |
| Databases (MySQL) | RDS Aurora MySQL, RDS MySQL | PlanetScale, MySQL (unmanaged) |
| Databases (MongoDB) | DocumentDB | MongoDB Atlas, MongoDB (unmanaged) |
| Caching | ElastiCache | Redis Enterprise Cloud, Redis (unmanaged) |
| Message streaming | MSK (Managed Streaming for Kafka) | Confluent Cloud, Kafka (unmanaged) |
| Search | OpenSearch Service | OpenSearch (unmanaged) |
| Workflow | MWAA (Managed Airflow) | Astronomer, Airflow (unmanaged) |
| Analytics | Amazon Athena | Presto (unmanaged) |

Third-party managed equivalents (Backblaze B2, Neon, PlanetScale, MongoDB Atlas, Redis Enterprise Cloud, Confluent Cloud, Astronomer) require your customers to bring their own credentials and accounts with these services.
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
For detailed service equivalent mappings and examples, see Service Equivalents or the Private Kubernetes service equivalents section.
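For the bring-your-own-credentials services above, one common pattern is handing the customer's credentials to your application as a Kubernetes Secret. The sketch below is illustrative, not a documented Tensor9 contract; the secret name, namespace, and key fields are placeholders:

```shell
# Hypothetical example: a customer supplies Backblaze B2 credentials as a
# Secret in the application namespace. Field names here are placeholders.
cat > b2-credentials.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: b2-credentials
  namespace: myapp
type: Opaque
stringData:
  keyID: "<application-key-id>"
  applicationKey: "<application-key>"
EOF
# kubectl apply -f b2-credentials.yaml   # run with cluster access
```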

How it works

Once your customer has a Kubernetes cluster running on their infrastructure, the deployment process follows the same workflow as Private Kubernetes environments:
Step 1: Customer installs Kubernetes cluster

Your customer provisions physical servers and installs their chosen Kubernetes distribution following the vendor’s installation guide. This includes:
  • Setting up control plane nodes
  • Joining worker nodes to the cluster
  • Configuring networking (CNI plugin)
  • Setting up storage classes for persistent volumes
  • Configuring ingress for external traffic
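The checks below sketch what "installed and configured" should mean before moving on. The resource types are standard Kubernetes, though your customer's distribution may add its own; run the function from a workstation with cluster access.

```shell
# Readiness checks for a freshly installed cluster.
check_cluster() {
  kubectl get nodes -o wide          # every node Ready, at the expected version
  kubectl get storageclass           # at least one (default) StorageClass
  kubectl get pods -n kube-system    # CNI and core components Running
  kubectl get ingressclass           # ingress controller registered
}
# check_cluster   # uncomment and run with cluster access
```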
Step 2: Customer sets up Tensor9 environment

After the Kubernetes cluster is running, your customer follows the standard Private Kubernetes setup:
  • Creates two namespaces (one for controller, one for application)
  • Installs the Tensor9 controller via Helm chart
  • Creates ServiceAccounts with appropriate RBAC permissions
  • Configures network access for the controller to reach your control plane
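The manifest below sketches the namespace and RBAC setup. The namespace names, ServiceAccount name, and permission list are illustrative placeholders; the Helm chart and values from your Tensor9 onboarding materials define the actual RBAC the controller needs.

```shell
# Placeholder names throughout; substitute your customer's conventions.
cat > tensor9-setup.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: tensor9-system
---
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tensor9-controller
  namespace: tensor9-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tensor9-deployer
  namespace: myapp
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tensor9-deployer
  namespace: myapp
subjects:
  - kind: ServiceAccount
    name: tensor9-controller
    namespace: tensor9-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tensor9-deployer
EOF
# kubectl apply -f tensor9-setup.yaml
# helm install tensor9-controller <chart> --namespace tensor9-system
```

The RoleBinding grants the controller's ServiceAccount (in its own namespace) permissions scoped to the application namespace, which keeps the two-namespace separation intact.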
Step 3: Deploy your application

You create releases and deploy your application following the standard deployment workflow, identical to Private Kubernetes deployments.

Deployment workflow

After the Kubernetes cluster is set up, all deployment steps are identical to Private Kubernetes environments. See the Private Kubernetes documentation for complete details.

Considerations for on-prem deployments

While the Tensor9 deployment process is identical to Private Kubernetes, on-premises environments have unique infrastructure considerations:

Hardware and capacity planning

Customers need to provision sufficient hardware resources:
  • Compute: CPU and memory for application workloads
  • Storage: Persistent volumes for databases and stateful services
  • Network: Bandwidth for application traffic and data transfer
  • High availability: Multiple nodes for redundancy and fault tolerance

Networking

On-premises networking often requires additional configuration:
  • Load balancing: External load balancer or MetalLB for Kubernetes Services
  • DNS: Internal DNS records or external DNS management
  • Firewall rules: Outbound HTTPS access for Tensor9 controller
  • TLS certificates: SSL/TLS certificates for ingress endpoints
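On bare metal there is no cloud load balancer behind Services of type LoadBalancer, which is where MetalLB comes in. A minimal layer-2 MetalLB configuration might look like the sketch below; the address range is a placeholder that must come from the customer's network team, and it assumes MetalLB is already installed in the metallb-system namespace.

```shell
# Placeholder address range; must be routable, unused IPs on the node network.
cat > metallb-pool.yaml <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.50.200-10.0.50.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
EOF
# kubectl apply -f metallb-pool.yaml   # run with cluster access
```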

Storage

Persistent storage options for on-premises Kubernetes:
  • Local storage: Direct-attached storage on worker nodes (fast but not highly available)
  • NFS: Network File System for shared storage across nodes
  • SAN: Storage Area Network for enterprise environments
  • Ceph/Rook: Software-defined storage for cloud-native storage management
  • Longhorn: Cloud-native distributed block storage
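As one example, the local-storage option can be exposed through Kubernetes' built-in no-provisioner StorageClass. With this class, PersistentVolumes are not created automatically; the customer creates one per local disk, and binding waits until a pod is scheduled so the volume lands on the right node.

```shell
# StorageClass for manually provisioned local volumes.
cat > local-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
# kubectl apply -f local-storageclass.yaml   # run with cluster access
```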

Maintenance and operations

Customers are responsible for:
  • Kubernetes upgrades: Planning and executing cluster upgrades
  • Node maintenance: Patching OS, replacing failed hardware
  • Backup and disaster recovery: Protecting cluster state and application data
  • Monitoring: Infrastructure monitoring (server health, disk space, network)
  • Security: Physical security, network security, access control
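For node maintenance, the standard Kubernetes cordon/drain cycle keeps workloads available while a node is patched or its hardware replaced. A sketch, assuming a placeholder node name:

```shell
NODE=worker-1   # placeholder node name
if command -v kubectl >/dev/null 2>&1; then
  kubectl cordon "$NODE"    # stop scheduling new pods onto the node
  kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
  # apply OS patches or replace hardware here, then reboot the node
  kubectl uncordon "$NODE"  # return the node to service
else
  echo "kubectl not found; run from a workstation with cluster access"
fi
```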

Best practices

If possible, create a test appliance using the same Kubernetes distribution your customer will use in production. Different distributions may have subtle differences in behavior or available features.
Provide clear hardware requirements and capacity planning guidance for your application. Include minimum and recommended specifications for CPU, memory, storage, and network bandwidth.
Some on-premises environments have restricted internet access. Ensure your deployment process accounts for:
  • Air-gapped container image distribution
  • Limited or scheduled connectivity windows
  • On-premises artifact mirrors
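For air-gapped image distribution, one common approach is mirroring each release's images into the customer's internal registry. The image name and registry host below are placeholders; the pull runs wherever internet access exists, and the push runs from inside the customer network.

```shell
SRC=ghcr.io/yourco/yourapp:1.4.2          # placeholder source image
MIRROR=registry.customer.internal:5000    # placeholder internal registry
DST="$MIRROR/${SRC#*/}"                   # strip the source registry host
if command -v docker >/dev/null 2>&1; then
  docker pull "$SRC"       # run where internet access is available
  docker tag "$SRC" "$DST"
  docker push "$DST"       # run from inside the customer network
else
  echo "would mirror $SRC to $DST"
fi
```

Tools like skopeo can copy images between registries directly, which avoids a local Docker daemon on the transfer host.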
Create detailed operational documentation for customers managing their own infrastructure:
  • Troubleshooting common issues
  • Performance tuning guidelines
  • Backup and restore procedures
  • Scaling recommendations

Troubleshooting

If you’re experiencing issues or need assistance with on-premises and bare metal deployments, we’re here to help:
  • Slack: Join our community Slack workspace for real-time support
  • Email: Contact us at [email protected]
Our team can help with deployment troubleshooting, configuration, and best practices for on-premises and bare metal environments.