How We Deployed and Scaled on Azure: A Production Playbook
Moving to Azure Container Apps changed how fast we could ship. This is a production playbook: container registry, environment-based scaling, and automated deployments triggered by a GitHub push, plus the lessons from doing it wrong first.
Before we moved to Azure Container Apps, deploying a service meant SSHing into a VM and hoping the environment matched staging. It wasn't a process; it was a ritual. This is the exact setup we run at Fygurs: a container registry, Container Apps with internal ingress between services, and a GitHub Actions pipeline that deploys on merge. Zero manual steps.
Azure Cloud Platform
Microsoft Azure provides on-demand cloud computing services. Instead of managing physical servers, you rent compute, storage, and networking resources that scale automatically based on demand.
Key Benefits
- Pay-per-use: Only pay for resources you consume
- Global availability: Deploy to 60+ regions worldwide
- Managed services: Azure handles patching, scaling, and maintenance
- Security: Built-in compliance, encryption, and identity management
Azure Resource Groups
A Resource Group is a logical container that holds related Azure resources. All resources in a solution (containers, databases, storage) are grouped together for unified management.
RESOURCE GROUP STRUCTURE
┌──────────────────────────────────────────────────┐
│  Resource Group: my-project                      │
│                                                  │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐  │
│  │ Container  │  │ Container  │  │    Blob    │  │
│  │   App 1    │  │   App 2    │  │  Storage   │  │
│  └────────────┘  └────────────┘  └────────────┘  │
│                                                  │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐  │
│  │ Container  │  │ PostgreSQL │  │  RabbitMQ  │  │
│  │  Registry  │  │  Database  │  │  Service   │  │
│  └────────────┘  └────────────┘  └────────────┘  │
└──────────────────────────────────────────────────┘
Creating a Resource Group
az group create \
  --name my-project \
  --location eastus
Resource Group Benefits
- Lifecycle management: Delete all resources by deleting the group
- Access control: Apply RBAC at the group level
- Cost tracking: Monitor spending per resource group
- Tagging: Organize resources with metadata tags
- Locks: Prevent accidental deletion of critical resources
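Two of these benefits, tagging and locks, are one-liners. A sketch against the my-project group from above (the tag values are placeholders):

```shell
# Tag the group so costs roll up per environment and team
az group update \
  --name my-project \
  --set tags.env=production tags.team=platform

# Lock the group so nobody deletes it by accident
az lock create \
  --name no-delete \
  --resource-group my-project \
  --lock-type CanNotDelete
```

A CanNotDelete lock still allows reads and updates; removing the lock is a deliberate extra step before any destructive change.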
Azure Container Registry
Azure Container Registry (ACR) is a managed Docker registry service for storing and managing container images. It integrates seamlessly with Azure Container Apps and Kubernetes.
CONTAINER REGISTRY FLOW
┌─────────────┐ ┌─────────────────┐ ┌─────────────┐
│ GitHub │ │ Azure │ │ Container │
│ Actions │─────▶│ Container │─────▶│ Apps │
│ (CI/CD) │ push │ Registry │ pull │ (Runtime) │
└─────────────┘ └─────────────────┘ └─────────────┘
│
┌──────┴──────┐
│ Images │
│ - api:v1 │
│ - web:v2 │
│ - worker │
└─────────────┘
Service Tiers
- Basic: Entry-level for development and testing
- Standard: Production workloads with higher throughput
- Premium: Geo-replication, content trust, private endpoints
Security Features
- TLS 1.2: Encrypted image transfers
- Microsoft Entra ID: Identity-based authentication
- Service principals: Automated CI/CD authentication
- RBAC: Granular permission control
- Image scanning: Microsoft Defender integration
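For CI/CD authentication, a common pattern is a service principal scoped to just the registry. A sketch, assuming the myregistry name used below and the built-in AcrPush role:

```shell
# Resolve the registry's resource ID
ACR_ID=$(az acr show --name myregistry --query id --output tsv)

# Create a service principal that can push (and pull) images, nothing more
az ad sp create-for-rbac \
  --name ci-acr-push \
  --role AcrPush \
  --scopes $ACR_ID
```

The command prints the client ID and secret once; store them as CI secrets immediately, since the secret cannot be retrieved later.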
Creating a Container Registry
az acr create \
  --resource-group my-project \
  --name myregistry \
  --sku Standard
Pushing Images
az acr login --name myregistry
docker tag my-app:latest myregistry.azurecr.io/my-app:v1
docker push myregistry.azurecr.io/my-app:v1
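To confirm the push landed, the registry can list its repositories and tags:

```shell
az acr repository list --name myregistry --output table
az acr repository show-tags --name myregistry --repository my-app
```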
Azure Container Apps
Azure Container Apps is a serverless container platform that runs microservices without managing infrastructure. It handles scaling, load balancing, and HTTPS certificates automatically.
CONTAINER APPS ARCHITECTURE
┌──────────────────────────────────────────────┐
│ Container Apps Environment │
│ │
│ ┌─────────┐ ┌─────────┐ │
│ │ API │ TCP │ Worker │ │
│ │ Gateway │───────▶│ Service │ │
│ │ :443 │ │ :3001 │ │
│ └─────────┘ └────┬────┘ │
│ │ │ │
│ │ TCP │ AMQP │
│ ▼ ▼ │
│ ┌─────────┐ ┌─────────┐ │
│ │ Auth │ │ RabbitMQ│ │
│ │ Service │ │ :5672 │ │
│ │ :3002 │ └─────────┘ │
│ └─────────┘ │
│ │
└──────────────────────────────────────────────┘
▲
│ HTTPS
│
Internet
Key Features
- Serverless scaling: Scale to zero or up to hundreds of replicas
- KEDA integration: Scale based on HTTP traffic, CPU, memory, or queue depth
- Built-in ingress: HTTPS and TCP without additional configuration
- Service discovery: Internal DNS for microservice communication
- Traffic splitting: Blue/green deployments and A/B testing
- Dapr integration: Microservices patterns out of the box
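The KEDA integration is configurable straight from the CLI. A sketch attaching an HTTP concurrency rule to an existing app (the rule name and threshold of 50 concurrent requests per replica are placeholders):

```shell
az containerapp update \
  --name api-gateway \
  --resource-group my-project \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
```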
Ingress Configuration
Ingress controls how traffic reaches your container apps. It defines whether the app is publicly accessible or only reachable within the environment.
INGRESS TYPES
External Ingress Internal Ingress
(Public Access) (Private Access)
Internet Container Apps Environment
│ ┌─────────────────────────┐
│ HTTPS │ │
▼ │ Service A ───▶ Service B
┌───────────────┐ │ │ │
│ API Gateway │ │ └────────▶ Service C
│ (external) │ │ │
└───────────────┘ └─────────────────────────┘
External Ingress
Exposes the app to the public internet. Azure provides a fully qualified domain name (FQDN) and automatic TLS certificates. Use --ingress external for API gateways and web apps.
Internal Ingress
Only accessible within the Container Apps environment. Other apps reach it via internal DNS using the app name. Use --ingress internal for microservices.
Internal services are accessed by name: http://auth-service or tcp://auth-service:3002
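Consumers usually receive that internal address through configuration rather than hardcoding it. A sketch wiring the auth-service example into the gateway's environment (variable names are illustrative):

```shell
az containerapp update \
  --name api-gateway \
  --resource-group my-project \
  --set-env-vars AUTH_SERVICE_HOST=auth-service AUTH_SERVICE_PORT=3002
```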
Transport Protocols
- HTTP/HTTPS: Default for web applications, automatic TLS termination
- TCP: Required for NestJS microservices, RabbitMQ, PostgreSQL, Redis. Use --transport tcp
Cold Start Problem
When a Container App scales to zero, the next request must wait for the container to start. This cold start can take 15-30 seconds, making scale-to-zero unsuitable for user-facing APIs.
COLD START TIMELINE
Scale to Zero First Request Container Ready
│ │ │
▼ ▼ ▼
─────●──────────────────────●──────────────────────●─────▶
│ │ │
│ Idle Time │ 15-30 seconds │
│ (no cost) │ (user waiting) │
The solution is to maintain at least one replica running at all times with --min-replicas 1. Response times drop to ~100-150ms. The idle cost for the smallest configuration (0.25 CPU / 512MB) remains under $10/month, a reasonable trade-off for eliminating cold starts on production APIs.
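On the CLI, that fix is a one-line update to an existing app (names taken from the examples in this post):

```shell
az containerapp update \
  --name api-gateway \
  --resource-group my-project \
  --min-replicas 1
```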
Creating a Container App
az containerapp create \
  --name api-gateway \
  --resource-group my-project \
  --environment my-environment \
  --image myregistry.azurecr.io/api:v1 \
  --target-port 3000 \
  --ingress external \
  --min-replicas 1 \
  --max-replicas 10 \
  --registry-server myregistry.azurecr.io \
  --registry-username $REGISTRY_USERNAME \
  --registry-password $REGISTRY_PASSWORD \
  --env-vars \
    DATABASE_URL=$DATABASE_URL \
    JWT_SECRET=secretref:jwt-secret
Internal Microservice (TCP)
az containerapp create \
  --name auth-service \
  --resource-group my-project \
  --environment my-environment \
  --image myregistry.azurecr.io/auth:v1 \
  --target-port 3002 \
  --ingress internal \
  --transport tcp \
  --min-replicas 1 \
  --max-replicas 5
GitHub Actions CD Pipeline
Continuous Deployment automates the process of building, pushing, and deploying containers whenever code changes are merged.
CD PIPELINE FLOW
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ GitHub │ │ Build │ │ Push │ │ Deploy │
│ Push │───▶│ Docker │───▶│ to ACR │───▶│ to ACA │
│ │ │ Image │ │ │ │ │
└──────────┘ └──────────┘ └──────────┘ └──────────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ GitHub Secrets │
│ AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID │
│ REGISTRY_SERVER, REGISTRY_USERNAME, REGISTRY_PASSWORD │
│ DATABASE_URL, JWT_SECRET, RABBITMQ_HOST, ... │
└──────────────────────────────────────────────────────────┘
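Those secrets can be loaded from a terminal with the GitHub CLI instead of the web UI; a sketch, assuming the values are already in local environment variables:

```shell
gh secret set AZURE_CLIENT_ID --body "$AZURE_CLIENT_ID"
gh secret set AZURE_CLIENT_SECRET --body "$AZURE_CLIENT_SECRET"
gh secret set AZURE_TENANT_ID --body "$AZURE_TENANT_ID"
```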
Workflow Structure
name: CD

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AZURE_CONTAINER_REGISTRY: ${{ secrets.AZURE_CONTAINER_REGISTRY }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install dependencies
        run: npm ci

      - name: Login to Azure
        uses: azure/login@v2
        with:
          creds: '{"clientId":"${{ secrets.AZURE_CLIENT_ID }}","clientSecret":"${{ secrets.AZURE_CLIENT_SECRET }}","tenantId":"${{ secrets.AZURE_TENANT_ID }}"}'

      - name: Build Docker image
        run: |
          docker build --platform=linux/amd64 \
            -t my-service \
            -f apps/my-service/Dockerfile .
          docker tag my-service \
            ${{ env.AZURE_CONTAINER_REGISTRY }}/my-service:${{ github.sha }}

      - name: Login to ACR
        run: |
          echo "${{ env.REGISTRY_PASSWORD }}" | docker login ${{ env.AZURE_CONTAINER_REGISTRY }} \
            --username ${{ env.REGISTRY_USERNAME }} --password-stdin

      - name: Push to ACR
        run: |
          docker push ${{ env.AZURE_CONTAINER_REGISTRY }}/my-service:${{ github.sha }}

      - name: Deploy to Container Apps
        run: |
          az containerapp update \
            --name my-service \
            --resource-group my-project \
            --image ${{ env.AZURE_CONTAINER_REGISTRY }}/my-service:${{ github.sha }}
Use az containerapp create for the initial deployment; after that, az containerapp update is enough to roll out each new image.
Image Tagging Strategy
- ${{ github.sha }}: Unique commit hash for traceability
- latest: Always points to most recent build
- v1.0.0: Semantic versioning for releases
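All three tags can point at the same image; only the tag strings differ. A minimal sketch of how they compose (registry, app, SHA, and version values are placeholders):

```shell
REGISTRY=myregistry.azurecr.io
APP=my-service
SHA=abc1234        # in GitHub Actions this is the commit SHA
VERSION=v1.0.0

IMAGE="$REGISTRY/$APP"
echo "$IMAGE:$SHA"      # immutable, traceable build
echo "$IMAGE:latest"    # moving pointer to the newest build
echo "$IMAGE:$VERSION"  # human-readable release tag
```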
Best Practices
Security
- Never hardcode secrets: Use GitHub Secrets and Azure secret references
- Internal ingress: Expose only the API gateway publicly
- Service principals: Use dedicated identities for CI/CD
- TLS everywhere: Azure provides automatic HTTPS certificates
Cost Optimization
- Scale to zero: Use for background jobs and scheduled tasks, not user-facing APIs (cold starts)
- Right-size resources: Start small, scale based on metrics
- Resource groups: Tag and monitor costs per environment
Reliability
- Multiple replicas: Minimum 2 replicas for production
- Health probes: Configure liveness and readiness checks
- Revision management: Keep previous revisions for quick rollback
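A rollback with revisions is a traffic shift, not a redeploy. A sketch (the revision name is a placeholder; real names look like api-gateway--suffix):

```shell
# List revisions to find the last known-good one
az containerapp revision list \
  --name api-gateway \
  --resource-group my-project \
  --output table

# Send 100% of traffic back to it
az containerapp ingress traffic set \
  --name api-gateway \
  --resource-group my-project \
  --revision-weight api-gateway--previous=100
```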
Conclusion
Azure's container services provide a complete platform for deploying microservices. Resource Groups organize infrastructure, Container Registry stores images securely, and Container Apps runs workloads with automatic scaling. Combined with GitHub Actions, you get a fully automated deployment pipeline from code push to production.
The serverless nature of Container Apps means you focus on application code while Azure handles infrastructure concerns. Internal networking keeps microservices secure, while external ingress exposes only what needs to be public. This architecture scales from development to production without changing deployment patterns.
Related Reading
These posts extend the Azure deployment patterns covered here:
- Shipping Faster with Automated Pipelines: CI/CD with GitHub Actions — the GitHub Actions workflow that triggers the Azure Container Apps deployments described in this guide.
- Scaling Applications in the Cloud: Kubernetes and GitOps in Practice — how Kubernetes and ArgoCD complement Azure for teams that need declarative, auditable cluster management.
See the Fygurs and cloud infrastructure projects to explore the production systems this Azure setup powers.
Written by

Technical Lead and Full Stack Engineer leading a 5-engineer team at Fygurs (Paris, Remote) on Azure cloud-native SaaS. Graduate of 1337 Coding School (42 Network / UM6P). Writes about architecture, cloud infrastructure, and engineering leadership.