How deployments work
Deployments in Tensor9 follow a three-stage process: publish, release, and deploy.
Publish your origin stack
First, you publish your origin stack to your control plane, which makes it available for release. The publish command uploads your infrastructure code to your control plane's artifact storage and returns a native stack ID (e.g., s3://t9-ctrl-000001/my-stack.tf.tgz).

Important: You only need to bind your stack to your app once using tensor9 stack bind. After the initial bind, you can publish new versions without re-binding.
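A minimal sketch of this step is shown below. The tensor9 stack bind command appears in this guide, but the publish subcommand name and all flag names shown here are assumptions for illustration; consult the CLI reference for exact syntax.

```bash
# One-time: bind the origin stack to your app (flag names are illustrative assumptions)
tensor9 stack bind -appName my-app -stackName my-stack

# Publish a new version of the origin stack to the control plane
# (the "publish" subcommand and its flags are assumptions)
tensor9 stack publish -stackName my-stack
```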
Create a release (compilation)
Next, you create a release for a specific appliance. This triggers your control plane to compile your origin stack into a deployment stack tailored to that appliance's form factor (a sketch of the command follows the compilation steps below). During compilation, your control plane:
- Validates your origin stack to ensure it’s well-formed
- Ports cloud-specific resources to match the appliance’s form factor (e.g., AWS RDS → Google Cloud SQL)
- Instruments the stack for observability (logs, metrics, traces)
- Identifies artifacts (container images, S3 objects) and rewrites references to point to appliance-local locations
- Generates a deployment stack - a ready-to-deploy infrastructure-as-code artifact
For Terraform/OpenTofu origin stacks, Tensor9 downloads the resulting deployment stack into a local directory named after your appliance (e.g., ./my-test-appliance/). For CloudFormation origin stacks, the control plane automatically creates the deployment stack in your control plane's AWS account.
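A minimal sketch of release creation, using the tensor9 stack release create command and -vendorVersion flag that appear elsewhere in this guide (any flags for selecting the target appliance are omitted here and would be assumptions):

```bash
# Compile the origin stack into a deployment stack for an appliance
tensor9 stack release create -vendorVersion 1.4.7
```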
Deploy to the appliance
Finally, you deploy the compiled stack. During deployment, Tensor9 copies any referenced artifacts (container images, S3 objects) to the appliance's local environment. The deployment process depends on your stack type:

For Terraform/OpenTofu: You deploy the compiled stack yourself using standard tooling (tofu init and tofu apply in the downloaded deployment stack directory).

For CloudFormation: Your control plane automatically creates the CloudFormation stack in your control plane's AWS account; you can monitor deployment progress with tensor9 report or by viewing CloudFormation stack events.

In either case, the deployment executes in the target appliance, creating all the infrastructure resources your application needs.
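A sketch of both deployment paths, using commands referenced in this guide (the appliance directory name and CloudFormation stack name are placeholders):

```bash
# Terraform/OpenTofu: deploy the compiled deployment stack with standard tooling
cd ./my-test-appliance/
tofu init
tofu apply

# CloudFormation: the control plane creates the stack for you; monitor progress with
tensor9 report -customerName <CUSTOMER_NAME>
aws cloudformation describe-stack-events --stack-name <stack-name>
```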
Creating releases for different appliance types
The release creation process differs slightly depending on whether you're deploying to a test appliance or a customer appliance.

Releasing to test appliances

Test appliances are environments you control, used for validation before production deployments.

Releasing to customer appliances

Customer appliances are production environments running in your customer's infrastructure. To create a release for a customer appliance, pass the customer name; you can then confirm the release with tensor9 report.
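A sketch of the customer-appliance variant, combining the -customerName and -vendorVersion flags shown in this guide (test-appliance releases target a test appliance you own instead; any appliance-selection flags are not shown):

```bash
# Release to a customer appliance, then confirm the release landed
tensor9 stack release create -customerName <CUSTOMER_NAME> -vendorVersion 1.4.7
tensor9 report -customerName <CUSTOMER_NAME>
```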
Version management
The -vendorVersion parameter lets you track which version of your application is deployed to each appliance. It should match your internal versioning scheme (e.g., semantic versioning). The deployed version appears in tensor9 report, making it easy to track which version is deployed where.
Deployment stack structure
The deployment stack structure depends on your origin stack type.

Terraform/OpenTofu deployment stacks

After creating a release, Tensor9 downloads a deployment stack into a directory named after your appliance (e.g., ./my-test-appliance/).

CloudFormation deployment stacks

For CloudFormation origin stacks, your control plane automatically creates the compiled deployment stack as a CloudFormation stack in your control plane's AWS account when you create a release. There's no local directory to download; the stack is created and managed directly in CloudFormation.

Compilation process
When you create a release, your control plane compiles your origin stack through several transformation steps.

Service equivalents

During compilation, Tensor9 maps cloud-specific services in your origin stack to their functional equivalents in the target appliance's environment, based on a service equivalents registry that maps services across cloud providers. For a comprehensive guide to service equivalents, including detailed examples and best practices, see Service Equivalents.
| Service Category | AWS | Google Cloud | Azure | DigitalOcean | On-Prem |
|---|---|---|---|---|---|
| Containers | EKS, ECS | GKE | AKS | DOK | Kubernetes |
| Functions | Lambda | Cloud Functions | Azure Functions | Functions | Knative (unmanaged) |
| Networking | VPC | VPC | VNet | - | - |
| Load balancing | Load Balancer | Load Balancer | Load Balancer | Load Balancer | Cloudflare (optional) |
| DNS | Route 53 | Cloud DNS | Azure DNS | DigitalOcean DNS | Cloudflare (optional) |
| Identity and access management | IAM | IAM | IAM | - | - |
| Object storage | S3 | Cloud Storage (GCS) | Azure Blob Storage | Spaces | Backblaze B2, MinIO (unmanaged) |
| Databases (PostgreSQL) | RDS Aurora PostgreSQL, RDS PostgreSQL | Cloud SQL PostgreSQL | Azure Database for PostgreSQL | Managed PostgreSQL | Neon, CloudNative PostgreSQL (unmanaged) |
| Databases (MySQL) | RDS Aurora MySQL, RDS MySQL | Cloud SQL MySQL | Azure Database for MySQL | Managed MySQL | PlanetScale, MySQL (unmanaged) |
| Databases (MongoDB) | DocumentDB | Atlas MongoDB | Cosmos DB (MongoDB API) | Managed MongoDB | MongoDB Atlas, MongoDB (unmanaged) |
| Caching | ElastiCache | Memorystore | Azure Cache for Redis | Managed Redis | Redis Enterprise Cloud, Redis (unmanaged) |
| Message streaming | MSK (Managed Streaming for Kafka) | Confluent Cloud, Kafka (unmanaged) | Event Hubs (Kafka compatible) | Confluent Cloud, Kafka (unmanaged) | Confluent Cloud, Kafka (unmanaged) |
| Search | OpenSearch Service | OpenSearch (unmanaged) | OpenSearch (unmanaged) | OpenSearch (unmanaged) | OpenSearch (unmanaged) |
| Workflow | MWAA (Managed Airflow) | Cloud Composer | Azure Data Factory | Astronomer, Airflow (unmanaged) | Astronomer, Airflow (unmanaged) |
| Analytics | Amazon Athena | BigQuery | Azure Synapse Analytics | Presto (unmanaged) | Presto (unmanaged) |
Third-party managed equivalents (Backblaze B2, Neon, PlanetScale, MongoDB Atlas, Redis Enterprise Cloud, Confluent Cloud, Astronomer) require your customers to bring their own credentials and accounts with these services.
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
Parameterization
Tensor9 automatically injects an instance_id variable into every deployment to ensure resource uniqueness, so the same stack can be deployed to multiple appliances without naming collisions.
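A minimal sketch of how the injected variable surfaces at deploy time (the value and directory name are placeholders; Tensor9 normally supplies the value, so passing it by hand is only for local experimentation):

```bash
# See where the injected variable is referenced in the compiled deployment stack
grep -rn "instance_id" ./my-test-appliance/

# Override it manually only when experimenting locally (placeholder value)
tofu plan -var 'instance_id=test-appliance-01'
```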
Artifact identification and rewriting
During compilation, Tensor9 identifies artifact references (container images, S3 objects) in your origin stack and rewrites them to point to appliance-local locations. The artifacts themselves are copied to the appliance during deployment (e.g., when you run tofu apply). This ensures your artifacts are available locally within the appliance without requiring cross-account permissions.
Observability instrumentation
Your control plane configures telemetry routing so logs, metrics, and traces flow back to your observability sink.

Deploying updates
To deploy changes to an existing appliance, publish a new version of your origin stack and create a new release (the full command sequence is sketched after this list):
- Make changes to your origin stack (add resources, update configurations, etc.)
- Publish the updated origin stack
- Create a new release with an incremented version
- Deploy the update

Monitor the update's progress with tensor9 report -customerName <CUSTOMER_NAME> or by viewing CloudFormation stack events.
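The full update loop, combining the commands sketched earlier (the publish subcommand and flag names other than -vendorVersion and -customerName remain assumptions; the appliance directory is a placeholder):

```bash
# 1. Publish the updated origin stack (subcommand assumed, as above)
tensor9 stack publish -stackName my-stack

# 2. Create a new release with an incremented version
tensor9 stack release create -customerName <CUSTOMER_NAME> -vendorVersion 1.4.8

# 3. Deploy the update (Terraform/OpenTofu; CloudFormation stacks update automatically)
cd ./<appliance-directory>/ && tofu apply
```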
Testing strategy
Always test releases in test appliances before deploying to customer appliances:
1. Create a test appliance
2. Deploy and validate in test
3. Deploy to production after validation
Multi-stack deployments
Some applications consist of multiple independently deployable components. You can bind multiple origin stacks to a single app and deploy them separately.
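A sketch of binding two stacks to one app (the flag names and stack naming are assumptions, mirroring the single-stack bind sketch above):

```bash
# Bind each independently deployable component as its own origin stack
tensor9 stack bind -appName my-app -stackName api
tensor9 stack bind -appName my-app -stackName worker
```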
Integration with CI/CD

Tensor9 integrates with standard CI/CD tools and practices. A GitHub Actions workflow, for example, can run the publish → release → deploy steps on every merge; the commands such a job would run are sketched below.
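Since workflow syntax varies by CI system, this sketch shows only the command sequence a CI job would run. Subcommand and flag names follow the assumptions above; the CUSTOMER_NAME and VENDOR_VERSION environment variables are placeholders, and the API key would come from your CI secret store.

```bash
#!/usr/bin/env bash
# Commands a CI job (e.g., a GitHub Actions step) could run on merge to main.
set -euo pipefail

export T9_API_KEY="${T9_API_KEY:?set this in your CI secret store}"

tensor9 stack publish -stackName my-stack                                          # assumed subcommand
tensor9 stack release create -customerName "$CUSTOMER_NAME" -vendorVersion "$VENDOR_VERSION"
tensor9 report -customerName "$CUSTOMER_NAME"                                      # verify the release landed
```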
Backend configuration

Tensor9 does not modify backend configuration in your origin stack. Any backend configuration you include in your origin stack is preserved in the compiled deployment stack, giving you full control over Terraform state management. You are responsible for managing backend configuration for your deployments. You can include it directly in your origin stack, or provide it at deployment time.

Option 1: Include in origin stack

Declare the backend block in your origin stack, using the instance_id variable to ensure each appliance has its own state file.
Option 2: Provide at deployment time
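Pass backend settings as CLI arguments when initializing the deployment stack. A sketch for an S3 backend (bucket, key pattern, and region are placeholders; assumes the origin stack declares an empty backend "s3" {} block, and INSTANCE_ID is a shell variable holding the appliance's instance ID):

```bash
# Supply backend settings when initializing the compiled deployment stack
tofu init \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=appliances/${INSTANCE_ID}/terraform.tfstate" \
  -backend-config="region=us-east-1"
```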
Rollback and recovery
Tensor9 uses a roll-forward approach to recovery. If a deployment fails or causes issues, you recover by deploying a new release based on a previous working version of your origin stack.

Example: Recovering from a broken deployment

Suppose you deployed version 1.4.6 and it caused issues. To recover, you roll forward to version 1.4.7 with the previous working configuration. In this example:
- Version 1.4.5 was the last working version
- Version 1.4.6 was deployed and caused issues
- Version 1.4.7 is the new release that restores the 1.4.5 configuration
1. Restore your origin stack to the previous working state
Update your origin stack to the state before the problematic changes in 1.4.6. This could mean:
- Checking out the git commit from version 1.4.5 (the last working version)
- Reverting the problematic changes in your repository
- Restoring from a backup of your infrastructure code
2. Create a new release with an incremented version

Create a new release from the restored origin stack. Note that this is version 1.4.7, not 1.4.5; you're rolling forward, not backward.
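For example, using the same command and flags as the earlier sketches (the version moves forward to 1.4.7 even though the configuration matches 1.4.5):

```bash
tensor9 stack release create -customerName <CUSTOMER_NAME> -vendorVersion 1.4.7
```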
3. Deploy the new release

Deploy the new release to the appliance.

For Terraform/OpenTofu: Terraform will compute the difference between the current state (1.4.6) and the desired configuration (1.4.7, which restores the 1.4.5 config), reverting any changes introduced in the failed 1.4.6 release.

For CloudFormation: The control plane automatically updates the CloudFormation stack when you create the release. Monitor the rollback progress using tensor9 report or CloudFormation stack events. CloudFormation will compute the changes needed to restore the working configuration.

Alternative recovery approaches
For emergency recovery with Terraform/OpenTofu, you can also:
- Re-deploy a previous deployment stack directory if you've retained it (bypasses compilation but uses a known-good deployment stack)
- Use Terraform state management (tofu state pull, tofu state push) to manually revert state (advanced users only)

For CloudFormation, you can also:
- Use CloudFormation stack rollback features in the AWS console or CLI to revert to a previous stack state
- View stack change sets to understand what changes were applied in each release
Monitoring deployments
Track deployment status and health using several tools.

Tensor9 report
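For example, using the command referenced throughout this guide:

```bash
# Check appliance status and the currently deployed vendor version
tensor9 report -customerName <CUSTOMER_NAME>
```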
Terraform output
After deployment, view the outputs defined in your origin stack.
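For example, from the deployment stack directory (the directory name is a placeholder):

```bash
cd ./my-test-appliance/
tofu output          # list all outputs
tofu output -json    # machine-readable form
```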
Observability sink

Once deployed, your appliance forwards logs, metrics, and traces to your configured observability sink. Monitor application health in your preferred tool (Datadog, New Relic, etc.).

Best practices
Use semantic versioning

Adopt a consistent versioning scheme for the -vendorVersion parameter:
- Major version (1.0.0 → 2.0.0): Breaking changes
- Minor version (1.0.0 → 1.1.0): New features, backward compatible
- Patch version (1.0.0 → 1.0.1): Bug fixes
Test before production
Always create and validate releases in test appliances before deploying to customer appliances. This catches issues early and reduces customer-facing incidents.
Use descriptive release notes

Include meaningful descriptions and notes with every release. This creates an audit trail and makes it easy to understand what changed in each release.
Maintain deployment runbooks
Document your deployment process, including:
- Required backend configuration
- Post-deployment validation steps
- Rollback procedures
- Contact information for escalations
Automate deployments
Use CI/CD pipelines to automate the publish → release → deploy workflow. This reduces manual errors and ensures consistent deployments across all appliances.
Keep deployment stacks
Archive deployment stack directories after successful deployments. This allows quick rollbacks and serves as a historical record of what was deployed.
Troubleshooting
Release creation fails
Symptom: tensor9 stack release create fails with compilation errors.

Solutions:
- Run tofu validate on your origin stack to catch syntax errors
- Check that all required variables are defined
- Verify artifact references (container images, S3 objects) are accessible
- Review Tensor9 logs for specific compilation errors
Deployment fails during tofu apply
Symptom: tofu apply fails with resource creation errors.

Solutions:
- Check that the appliance has the necessary permissions (IAM roles, service accounts)
- Verify resource names don't conflict (use the instance_id variable)
- Check cloud provider quotas (e.g., VPC limits, compute limits)
- Review Terraform error output for specific resource failures
Appliance not receiving deployment
Symptom: Release created but deployment doesn't occur.

For Terraform/OpenTofu - deployment stack doesn't download:
- Verify the appliance is in “Live” status using tensor9 report
- Check network connectivity between your environment and control plane
- Ensure your API key is valid: echo $T9_API_KEY
- Wait a few minutes - compilation can take time for large stacks

For CloudFormation - stack isn't created:
- Verify the control plane has the necessary permissions to create CloudFormation stacks
- Check CloudFormation events in your control plane's AWS account for errors: aws cloudformation describe-stack-events --stack-name <stack-name>
- Verify the release was successfully created: tensor9 report -customerName <CUSTOMER_NAME>
- Check AWS service quotas for CloudFormation stacks in your control plane's account
Backend configuration issues
Symptom: tofu init fails with backend errors.

Solutions:
- Ensure backend configuration is provided (either in the origin stack or at deployment time)
- If using CLI arguments, verify all required backend parameters are specified
- Ensure the state bucket exists and is accessible
- Check that the backend configuration uses ${var.instance_id} for unique state paths per appliance
Next steps
Now that you understand deployments, explore these related topics:
- Observability: Monitor deployed appliances
- Operations: Perform day-2 operations on deployments
- Testing: Advanced testing strategies
- Atlantis/Spacelift Integration: Automate Terraform deployments