
Overview
When you deploy an application to Azure customer environments using Tensor9:
- Customer appliances run entirely within the customer's Azure subscription
- Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
- Managed identities and RBAC enable your control plane to manage customer appliances with customer-approved permissions
- Service equivalents compile your origin stack into Azure-native resources
Prerequisites
Before deploying appliances to Azure customer environments, ensure:
Your control plane
- Dedicated AWS account for your Tensor9 control plane
- Control plane installed - See Installing Tensor9
- Origin stack published - Your application infrastructure defined and uploaded
Customer Azure subscription
Your customers must provide:
- Azure subscription where the appliance will be deployed
- Managed identities configured for the four-phase permissions model (Install, Steady-state, Deploy, Operate)
- Virtual network and networking configured according to their requirements
- Sufficient subscription quotas for your application’s resource needs
- Azure region where they want the appliance deployed
Your development environment
- Azure CLI installed and configured
- kubectl for Kubernetes operations
- Terraform or OpenTofu (if using Terraform origin stacks)
- Docker (if deploying container-based applications)
How Azure appliances work
Azure appliances are deployed using Azure-native services orchestrated by your Tensor9 control plane.
1. Customer provisions managed identities
Your customer creates four managed identities in their Azure subscription, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These identities define what the Tensor9 controller can do within their environment. The customer configures RBAC role assignments that allow your control plane to impersonate these managed identities with appropriate conditions (time windows, approval tags, etc.).
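A minimal sketch of how a customer might provision these identities with Terraform, assuming the azurerm provider; the resource group, region, and variable names are illustrative, while the identity names follow the tensor9-<phase>-${instance_id} pattern described in the permissions model below:

```hcl
# Illustrative sketch: resource group, location, and variable names are assumptions.
variable "instance_id" {
  type        = string
  description = "Unique identifier for this appliance instance"
}

resource "azurerm_resource_group" "tensor9" {
  name     = "tensor9-${var.instance_id}"
  location = "eastus"
}

# One user-assigned managed identity per permission phase.
resource "azurerm_user_assigned_identity" "phase" {
  for_each            = toset(["install", "steadystate", "deploy", "operate"])
  name                = "tensor9-${each.key}-${var.instance_id}"
  resource_group_name = azurerm_resource_group.tensor9.name
  location            = azurerm_resource_group.tensor9.location
}
```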
2. You create a release for the customer appliance
You create a release targeting the customer's appliance. Your control plane compiles your origin stack into a deployment stack tailored for Azure, converting any non-Azure resources to their Azure service equivalents. The deployment stack downloads to your local environment.
3. Customer grants deploy access
The customer approves the deployment by granting temporary deploy access. This can be manual (updating RBAC role assignments) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can use the Deploy managed identity in the customer's subscription.
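For the manual path, a customer might grant your control plane's principal the Managed Identity Operator role on the Deploy identity using the Azure CLI. A sketch with placeholder IDs; the real principal, subscription, and identity names come from the customer's environment:

```bash
# Placeholder IDs; substitute the control plane's service principal and the Deploy identity's resource ID.
az role assignment create \
  --assignee "<control-plane-principal-id>" \
  --role "Managed Identity Operator" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/tensor9-<instance-id>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/tensor9-deploy-<instance-id>"
```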
4. You deploy the release
You run the deployment locally against the downloaded deployment stack (see the sketch after this list). The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer's appliance. The controller uses the Deploy managed identity and creates all infrastructure resources in the customer's Azure subscription:
- Virtual networks, subnets, network security groups
- AKS clusters, Container Instances, Azure Functions
- Azure Database for PostgreSQL/MySQL, Azure Blob Storage, Azure Cache for Redis
- Azure Monitor workspaces, Log Analytics, managed identities, Azure DNS zones
- Any other Azure resources defined in your origin stack
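Assuming a Terraform/OpenTofu deployment stack, the local run looks something like the following; the directory name is illustrative and the exact variables depend on your stack:

```bash
cd myapp-deployment-stack        # downloaded deployment stack (illustrative name)
tofu init
tofu apply -var="instance_id=<instance-id>"
```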
5. Steady-state observability begins
After deployment, your control plane uses the Steady-state managed identity to continuously collect observability data (logs, metrics, traces) from the customer's appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.
Service equivalents
When you deploy an origin stack to Azure customer environments, Tensor9 automatically compiles resources from other cloud providers to their Azure equivalents.
How service equivalents work
When compiling a deployment stack for Azure:
- AWS resources are compiled - AWS resources are converted to their Azure equivalents
- Generic resources are adapted - Cloud-agnostic resources (like Kubernetes manifests) are adapted for Azure
- Configuration is adjusted - Resource configurations are modified to match Azure conventions and best practices
Common service equivalents
| Service Category | AWS | Azure Equivalent |
|---|---|---|
| Compute | ECS Fargate | Azure Container Instances (ACI) |
| Compute | Lambda | Azure Functions |
| Compute | EKS | AKS (Azure Kubernetes Service) |
| Storage | S3 | Azure Blob Storage |
| Storage | EBS | Azure Managed Disks |
| Database | RDS PostgreSQL | Azure Database for PostgreSQL |
| Database | RDS Aurora MySQL, RDS MySQL | Azure Database for MySQL |
| Database | ElastiCache Redis | Azure Cache for Redis |
| Networking | VPC | Virtual Network (VNet) |
| Networking | ALB/NLB/CLB | Azure Load Balancer, Application Gateway |
| Networking | NAT Gateway | Azure NAT Gateway |
| Networking | Route 53 | Azure DNS |
| Security | KMS | Azure Key Vault |
| Security | IAM Roles | Managed Identities |
| Observability | CloudWatch Logs | Azure Monitor Logs |
| Observability | CloudWatch Metrics | Azure Monitor Metrics |
| Observability | X-Ray | Application Insights |
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
Example: Compiling an AWS origin stack
If your origin stack defines a Lambda function, it is compiled to an Azure Function in the deployment stack.
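For instance, a minimal Lambda definition like this (function name, bucket, and handler are illustrative assumptions) is compiled to an Azure Function, with its source copied to an appliance-local Storage Account as described under Artifacts:

```hcl
# Illustrative origin-stack resource; names and source location are assumptions.
resource "aws_lambda_function" "api" {
  function_name = "myapp-api-${var.instance_id}"
  s3_bucket     = "myapp-artifacts"           # vendor-owned source bucket
  s3_key        = "api/lambda.zip"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  role          = aws_iam_role.lambda.arn     # IAM role, compiled to a managed identity on Azure
}
```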
Permissions model
Azure appliances use a four-phase managed identity permissions model that balances operational capability with customer control.
The four permission phases
| Phase | Managed Identity | Purpose | Access Pattern |
|---|---|---|---|
| Install | tensor9-install-${instance_id} | Initial setup, major infrastructure changes | Customer-approved, rare |
| Steady-state | tensor9-steadystate-${instance_id} | Continuous observability collection (read-only) | Active by default |
| Deploy | tensor9-deploy-${instance_id} | Deployments, updates, configuration changes | Customer-approved, time-bounded |
| Operate | tensor9-operate-${instance_id} | Remote operations, troubleshooting, debugging | Customer-approved, time-bounded |
Managed identity structure
Each managed identity is created in the customer's Azure subscription with RBAC role assignments that allow your control plane to use it.
Example: Deploy managed identity with conditional access
The Deploy identity can only be used when:
- The customer has granted the Managed Identity Operator role
- Resources being created are tagged with the correct instance-id
- The time window hasn't expired (enforced via conditional access policies)
The Steady-state identity, by contrast, is limited to read-only observability:
- Can read observability data from resources tagged with the appliance's instance-id
- Cannot modify, delete, or terminate any resources
- Cannot change RBAC role assignments
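Building on the identity sketch above, the role assignment that lets your control plane use the Deploy identity might look like this; the principal ID variable is an assumption supplied by the customer:

```hcl
# Grants the vendor control plane's principal permission to use the Deploy identity.
resource "azurerm_role_assignment" "deploy_access" {
  scope                = azurerm_user_assigned_identity.phase["deploy"].id
  role_definition_name = "Managed Identity Operator"
  principal_id         = var.tensor9_control_plane_principal_id  # assumed variable

  # The customer can further restrict this assignment with the optional
  # condition / condition_version arguments (for example, scoping by the
  # appliance's instance-id tag) or time-bound it through their approval workflow.
}
```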
Deployment workflow with managed identities
1. Customer grants deploy access
Customer approves a deployment by granting the Managed Identity Operator role and setting up conditional access policies. This can be done manually or through automated approval workflows.
2. You execute deployment locally
You run the deployment locally against the downloaded deployment stack (for example, with tofu apply, as shown earlier). The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3. Controller uses Deploy identity and creates resources
For each resource Terraform attempts to create, the Tensor9 controller inside the appliance uses the Deploy managed identity and creates the resource in the customer's subscription. All infrastructure changes occur within the customer's subscription using their Deploy managed identity permissions.
4. Deploy access expires
After the time window expires or the role assignment is removed, the Deploy identity can no longer be used. Your control plane automatically reverts to using only the Steady-state identity for observability.
Networking
Azure appliances use an isolated networking architecture with a Tensor9 controller that manages communication with your control plane.
Tensor9 controller VNet
When an appliance is deployed, Tensor9 creates an isolated VNet containing the Tensor9 controller. This VNet is configured with:
- Azure NAT Gateway: Provides outbound internet connectivity
- Route to control plane: Establishes a secure channel to your Tensor9 control plane
- No inbound NSG rules: The controller VNet does not accept inbound connections - all communication is outbound-only
The controller uses this outbound channel to:
- Receive deployments: Deployment stacks are pushed from your control plane to the appliance
- Configure observability pipeline: Set up log, metric, and trace forwarding to your observability sink
- Receive operational commands: Execute remote operations initiated from your control plane
Outbound-only security model
The Tensor9 controller in your customer's appliance is designed to make only outbound connections, so no ingress ports need to be opened in your customer's network perimeter.
Application VNet topology
Your application resources run in their own VNet(s), completely separate from the Tensor9 controller VNet. The application VNet topology is defined entirely by your origin stack - whatever VPC resources you define in your origin stack will be compiled to Azure VNet resources in the appliance.
Example: Application VNet with internet-facing load balancer
If your origin stack defines an AWS VPC with public subnets and a load balancer, that topology will compile to Azure VNet resources in the customer's appliance.
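A minimal sketch of such an origin-stack topology (CIDR ranges and names are illustrative); per the service-equivalents table, the VPC compiles to a VNet, the subnets to VNet subnets, and the ALB to an Azure Load Balancer or Application Gateway:

```hcl
# Illustrative origin-stack networking; values are assumptions.
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
  tags       = { "instance-id" = var.instance_id }
}

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.app.id
  cidr_block              = cidrsubnet(aws_vpc.app.cidr_block, 8, count.index)
  map_public_ip_on_launch = true
}

resource "aws_lb" "web" {
  name               = "myapp-${var.instance_id}"
  load_balancer_type = "application"
  internal           = false
  subnets            = aws_subnet.public[*].id
}
```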
Resource naming and tagging
All Azure resources should use the instance_id variable to ensure uniqueness across multiple customer appliances.
Parameterization pattern
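A sketch of the pattern; the storage account and the resource group it references are illustrative:

```hcl
variable "instance_id" {
  type        = string
  description = "Unique identifier for this customer appliance"
}

# Interpolate instance_id into every resource name.
resource "azurerm_storage_account" "app_data" {
  name                     = "myappdata${var.instance_id}"   # lowercase letters and numbers only
  resource_group_name      = azurerm_resource_group.app.name # resource group defined elsewhere in the stack
  location                 = azurerm_resource_group.app.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```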
Required tags
Tag all resources with instance-id to enable permissions scoping and observability:
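For example, the tag map can be defined once and applied to every resource (a minimal sketch; the resource group and location variable are illustrative):

```hcl
locals {
  common_tags = {
    "instance-id" = var.instance_id
  }
}

resource "azurerm_resource_group" "app" {
  name     = "myapp-${var.instance_id}"
  location = var.location      # assumed variable
  tags     = local.common_tags
}
```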
The instance-id tag:
- Enables RBAC condition expressions to scope permissions to specific appliances
- Allows Azure Monitor filters to isolate telemetry by appliance
- Helps customers track costs per appliance
- Facilitates resource discovery by Tensor9 controllers
Observability
Azure appliances provide comprehensive observability through Azure Monitor, Log Analytics, and Application Insights.
Azure Monitor Logs
Application and infrastructure logs flow to Log Analytics workspaces.
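A sketch of a Log Analytics workspace plus a diagnostic setting that forwards AKS control-plane logs to it; resource names and the log category are illustrative, and the AKS cluster is assumed to be defined elsewhere in the stack:

```hcl
resource "azurerm_log_analytics_workspace" "appliance" {
  name                = "myapp-logs-${var.instance_id}"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  sku                 = "PerGB2018"
  retention_in_days   = 30
  tags                = { "instance-id" = var.instance_id }
}

resource "azurerm_monitor_diagnostic_setting" "aks" {
  name                       = "myapp-aks-diag-${var.instance_id}"
  target_resource_id         = azurerm_kubernetes_cluster.app.id  # assumed cluster
  log_analytics_workspace_id = azurerm_log_analytics_workspace.appliance.id

  enabled_log {
    category = "kube-apiserver"
  }
}
```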
Azure Monitor Metrics
Infrastructure metrics are automatically collected:
- Virtual Machines: CPU percentage, network in/out, disk operations
- Azure Database for PostgreSQL: Database connections, CPU percent, storage used
- Azure Functions: Execution count, execution units, errors
- AKS: Node CPU/memory, pod counts, API server metrics
- Azure Load Balancer: Data path availability, health probe status, packet count
Application Insights
Enable distributed tracing for Azure Functions and containerized applications.
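A sketch of an Application Insights resource backed by the workspace above; names are illustrative, and the connection string is handed to your application as an environment variable:

```hcl
resource "azurerm_application_insights" "app" {
  name                = "myapp-insights-${var.instance_id}"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  workspace_id        = azurerm_log_analytics_workspace.appliance.id
  application_type    = "web"
  tags                = { "instance-id" = var.instance_id }
}

# Passed to the application, e.g. as APPLICATIONINSIGHTS_CONNECTION_STRING.
output "appinsights_connection_string" {
  value     = azurerm_application_insights.app.connection_string
  sensitive = true
}
```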
Azure Activity Log
All API calls within the customer's Azure subscription are logged to the Activity Log, providing a complete audit trail of what your control plane does:
- Managed identity usage
- Resource creation, modification, deletion
- Permission denials
- Role assignment changes
Artifacts
Azure appliances automatically provision private artifact repositories to store container images and application files deployed by your deployment stacks.
Container images (Azure Container Registry)
When you deploy an appliance, Tensor9 automatically provisions a private Azure Container Registry in the customer's Azure subscription to store your container images.
Example: Origin stack with container service
Your AWS origin stack references container images from your vendor's Amazon ECR (see the sketch after this list). During compilation, Tensor9:
- Detects the container image reference in your ECS task definition
- Provisions a private Azure Container Registry in the appliance (e.g., myappacr000000007e.azurecr.io)
- Copies the container image from your vendor ECR registry to the appliance's private ACR
- Rewrites the deployment stack to reference the appliance-local registry
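For example, an origin-stack task definition like the following (image URI, names, and sizes are illustrative) has its image copied into the appliance's private ACR and the reference rewritten:

```hcl
# Illustrative origin-stack container definition; execution role omitted for brevity.
resource "aws_ecs_task_definition" "web" {
  family                   = "myapp-web-${var.instance_id}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([{
    name         = "web"
    image        = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-web:1.4.2" # vendor ECR image
    portMappings = [{ containerPort = 8080 }]
  }])
}
```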
During the appliance lifecycle:
- Deploy (tofu apply): Tensor9 copies the container image from your vendor registry to the appliance's private registry
- Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private registry
Function source code
For Lambda functions in your AWS origin stack, Tensor9 automatically handles copying function source code to the customer's Azure environment:
- Provisions a private Azure Storage Account in the appliance for function sources
- Copies the Lambda source archive from your vendor S3 bucket to the appliance’s Storage Account
- Compiles the Lambda function to an Azure Function with the appliance-local source reference
Secrets management
Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store in your AWS origin stack, then pass them to your application as environment variables.
Secret naming and injection
Always use parameterized secret names and inject them as environment variables, as in the sketch below.
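A minimal sketch, assuming an ECS task in the origin stack; the secret name, image, and environment variable are illustrative:

```hcl
# Parameterized secret plus env-var injection into a container.
resource "aws_secretsmanager_secret" "db_password" {
  name = "myapp/${var.instance_id}/db-password"
}

resource "aws_ecs_task_definition" "api" {
  family                = "myapp-api-${var.instance_id}"
  container_definitions = jsonencode([{
    name  = "api"
    image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-api:1.0.0"
    # Inject the secret as an environment variable rather than fetching it via SDK calls.
    secrets = [{
      name      = "DB_PASSWORD"
      valueFrom = aws_secretsmanager_secret.db_password.arn
    }]
  }])
}
```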
If your application dynamically fetches secrets at runtime using AWS SDK calls (e.g., boto3.client('secretsmanager').get_secret_value()), those calls will NOT be automatically mapped by Tensor9. Always pass secrets as environment variables.
Operations
Perform remote operations on Azure appliances using the Operate managed identity.
kubectl on AKS
Execute kubectl commands against AKS clusters.
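Under the hood, these operations resemble standard AKS commands like the following (cluster, resource group, and namespace names are illustrative):

```bash
# Fetch credentials for the appliance's AKS cluster, then inspect workloads.
az aks get-credentials --resource-group myapp-<instance-id> --name myapp-aks-<instance-id>
kubectl get pods --namespace myapp
kubectl logs deployment/myapp-api --namespace myapp --tail=100
```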
Azure CLI operations
Execute Azure CLI commands.
Database queries
Execute SQL queries against Azure Database for PostgreSQL.
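Under the hood, the query resembles a standard psql session against the server's Azure endpoint (server, database, and user names are illustrative):

```bash
psql "host=myapp-<instance-id>.postgres.database.azure.com port=5432 dbname=myapp user=myappadmin sslmode=require" \
  -c "SELECT count(*) FROM orders;"
```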
Operations endpoints
Create temporary operations endpoints for interactive access.
Example: Complete Azure appliance
Here's a complete example of a deployment stack for an Azure appliance, compiled from an AWS origin stack and organized into main.tf, variables.tf, and outputs.tf.
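A condensed sketch of such a stack, with illustrative names, region, and a subset of resources:

```hcl
# variables.tf (sketch)
variable "instance_id" {
  type        = string
  description = "Unique identifier for this customer appliance"
}

variable "location" {
  type    = string
  default = "eastus"
}

# main.tf (sketch)
locals {
  common_tags = { "instance-id" = var.instance_id }
}

resource "azurerm_resource_group" "app" {
  name     = "myapp-${var.instance_id}"
  location = var.location
  tags     = local.common_tags
}

resource "azurerm_virtual_network" "app" {
  name                = "myapp-vnet-${var.instance_id}"
  address_space       = ["10.0.0.0/16"]
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  tags                = local.common_tags
}

resource "azurerm_storage_account" "app" {
  name                     = "myappdata${var.instance_id}" # lowercase letters and numbers only, 3-24 chars
  resource_group_name      = azurerm_resource_group.app.name
  location                 = azurerm_resource_group.app.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = local.common_tags
}

resource "azurerm_log_analytics_workspace" "app" {
  name                = "myapp-logs-${var.instance_id}"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  sku                 = "PerGB2018"
  retention_in_days   = 30
  tags                = local.common_tags
}

# outputs.tf (sketch)
output "resource_group_name" {
  value = azurerm_resource_group.app.name
}

output "log_analytics_workspace_id" {
  value = azurerm_log_analytics_workspace.app.id
}
```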
Best practices
Use instance_id for all resource names
Every Azure resource with a name should include ${var.instance_id} to prevent conflicts across customer appliances. Note: Storage account names must be globally unique and can only contain lowercase letters and numbers (no hyphens).
Tag all resources with instance-id
Apply the instance-id tag to every resource. This enables:
- RBAC condition expressions for permission scoping
- Azure Monitor filtering
- Cost tracking
- Resource discovery
Enable Azure Monitor diagnostics for all services
Configure diagnostics for AKS, Azure Functions, databases, and other services. This ensures observability data flows to your control plane.
Use AWS Secrets Manager for sensitive data
Never hardcode secrets. Use AWS Secrets Manager or SSM Parameter Store with parameterized names in your AWS origin stack. Pass secrets to your application as environment variables. Runtime SDK calls to fetch secrets are not automatically mapped by Tensor9.
Troubleshooting
Deployment fails with permission errors
Symptom: Terraform apply fails with "AuthorizationFailed" or "Forbidden" errors.
Solutions:
- Verify the Tensor9 controller has successfully authenticated with the Deploy managed identity
- Check the Deploy identity’s role assignments include necessary permissions for the resources being created
- Ensure the RBAC conditional access policies allow the operation
- Verify resources are tagged with the correct instance-id
- Review Azure Activity Log in the customer subscription to see which specific API call was denied
Resources fail to create due to naming conflicts
Symptom: "ResourceExists" or "NameNotAvailable" errors during deployment.
Solutions:
- Ensure all resource names include ${var.instance_id}
- Verify the instance_id variable is being passed correctly
- Check that no hardcoded resource names exist in your origin stack
- For storage accounts, remember names must be globally unique and only contain lowercase letters and numbers
- For storage accounts, ensure the name is between 3-24 characters
Observability data not flowing to control plane
Symptom: Azure Monitor logs and metrics aren't appearing in your observability sink.
Solutions:
- Verify the Steady-state identity has Monitoring Reader and Log Analytics Reader permissions
- Check that all resources are tagged with instance-id
- Ensure diagnostic settings are configured for all resources
- Verify Log Analytics workspace retention is set appropriately
- Check that the control plane is successfully using the Steady-state identity
Subscription quota limits exceeded
Symptom: "QuotaExceeded" or "OperationNotAllowed" errors when creating resources.
Solutions:
- Ask the customer to request quota increases from Azure Portal
- Consider deploying appliances in separate Azure regions
- Review and clean up unused resources in the customer’s subscription
- For virtual machine quotas, consider using different VM sizes
AKS cluster creation fails
Symptom: Kubernetes cluster creation times out or fails.
Solutions:
- Verify the region supports AKS
- Check that the Kubernetes version is supported in the region
- Ensure the VM SKU is available in the region
- Verify VNet and subnet configuration is correct
- Check that service principal or managed identity has necessary permissions
- Review Azure Service Health for service incidents
Storage account name validation errors
Symptom: "StorageAccountNameInvalid" errors.
Solutions:
- Ensure storage account names only contain lowercase letters and numbers (no hyphens)
- Verify the name is between 3-24 characters
- Check that ${var.instance_id} doesn't contain invalid characters
- Consider shortening the app name prefix if the full name is too long
Need help?
If you’re experiencing issues not covered here or need additional assistance with Azure deployments, we’re here to help:
- Slack: Join our community Slack workspace for real-time support
- Email: Contact us at [email protected]
Next steps
Now that you understand how to deploy to Azure customer environments, explore these related topics:
- Permissions Model: Deep dive into the four-phase permissions model
- Deployments: Learn how to create releases and deploy to customer appliances
- Operations: Execute remote operations on Azure appliances
- Observability: Set up comprehensive monitoring and logging
- Terraform Origin Stacks: Write Terraform origin stacks optimized for Azure
