Storing and Serving Files in the Cloud: Azure Blob Storage in Practice
File handling in a microservices environment is harder than it looks — upload limits, streaming, access control, and durability all need to be designed upfront. This covers how we built a scalable document management system on Azure Blob Storage.
Managing file uploads in a microservices architecture requires a robust, scalable storage solution. Azure Blob Storage provides enterprise-grade object storage that integrates seamlessly with NestJS. This guide shows you how to build a production-ready PDF management system with upload, download, update, and delete operations.
1. Why Azure Blob Storage?
When building applications that handle file uploads, you have several options: local filesystem, database BLOBs, or cloud object storage. Here's why Azure Blob Storage stands out:
- Scalability: Store petabytes of data without managing infrastructure
- Cost-Effective: Pay only for what you use, with multiple storage tiers
- Global CDN: Serve files from edge locations worldwide
- Security: Built-in encryption, access control, and compliance certifications
- Integration: First-class SDK support for Node.js/TypeScript
In this guide, we'll build a system where companies can upload, view, update, and delete PDF documents—all stored in Azure Blob Storage and managed through a NestJS microservices architecture.
2. Setup & Configuration
Install Dependencies
npm install @azure/storage-blob
npm install @nestjs/platform-express
npm install uuid
Environment Variables
Create a .env file with your Azure credentials:
AZURE_STORAGE_CONNEXION_STRING=DefaultEndpointsProtocol=https;AccountName=...
AZURE_STORAGE_CONTAINER_NAME=company-documents
Security Note: Never commit connection strings to version control. Use Azure Key Vault or environment-specific secrets management in production.
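A missing connection string otherwise only surfaces as a runtime failure deep inside the SDK, so it can help to fail fast at startup. A minimal sketch, assuming a plain helper (requireEnv is our name, not a NestJS API):

```typescript
// Hypothetical startup guard: read a required variable or fail immediately,
// instead of letting the Azure SDK throw a less obvious error later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: resolve the connection string once at bootstrap
// const azureConnection = requireEnv('AZURE_STORAGE_CONNEXION_STRING');
```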
3. The Service Layer
The service handles all Azure Blob Storage operations and communicates with other microservices via RabbitMQ.
Core Architecture
import { BlobServiceClient, BlockBlobClient } from '@azure/storage-blob';
import { Injectable, Inject } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class AzureBlobService {
  constructor(
    @Inject('AUTH') private authMicroservice: ClientProxy
  ) {}

  private azureConnection = process.env.AZURE_STORAGE_CONNEXION_STRING;

  getBlobClient(fileName: string, containerName: string): BlockBlobClient {
    const blobServiceClient = BlobServiceClient.fromConnectionString(
      this.azureConnection
    );
    const containerClient = blobServiceClient.getContainerClient(containerName);
    return containerClient.getBlockBlobClient(fileName);
  }
}
Upload Operation
The upload method generates a unique filename, uploads the file buffer, and returns the public URL:
async upload(
  file: Express.Multer.File,
  containerName: string
): Promise<string> {
  // Generate unique filename to prevent collisions
  const uniqueFileName = uuidv4() + file.originalname;
  const blobClient = this.getBlobClient(uniqueFileName, containerName);

  // Upload file buffer directly to Azure
  await blobClient.uploadData(file.buffer);

  // Return the public URL
  return blobClient.url;
}
Key Pattern: We prepend a UUID to the original filename. This prevents filename collisions while preserving the original extension for proper MIME type detection.
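The naming rule is small enough to isolate. A sketch of it as a standalone helper (using Node's built-in randomUUID instead of the uuid package, and a hyphen separator for readability; both are minor deviations from the snippet above):

```typescript
import { randomUUID } from 'node:crypto';

// Collision-proof blob naming: a fresh UUID prefix, with the original
// name (and therefore its extension) preserved for MIME detection.
function makeBlobName(originalName: string): string {
  return `${randomUUID()}-${originalName}`;
}
```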
Download Operation (Streaming)
Instead of loading the entire file into memory, we stream it directly to the client:
async getFile(fileName: string, containerName: string) {
  const blobClient = this.getBlobClient(fileName, containerName);
  const blobDownloaded = await blobClient.download();

  // Return a readable stream, not a buffer
  return blobDownloaded.readableStreamBody;
}
Why Streaming? For large PDFs (10MB+), buffering the entire file wastes memory and slows response time. Streaming sends chunks as they're read from Azure, reducing latency and memory usage.
Delete Operation
async deleteFile(fileName: string, containerName: string) {
  const blobClient = this.getBlobClient(fileName, containerName);
  await blobClient.deleteIfExists();
}
4. The Controller Layer
The controller exposes REST endpoints for file operations and integrates with Swagger for API documentation.
Upload Endpoint
@Post('company/:companyId/upload')
@UseInterceptors(FileInterceptor('file'))
@ApiConsumes('multipart/form-data')
@ApiBody({
  schema: {
    type: 'object',
    properties: {
      file: { type: 'string', format: 'binary' }
    }
  }
})
async uploadCompanyPdf(
  @Param('companyId') companyId: string,
  @UploadedFile() file: Express.Multer.File
) {
  if (!file) {
    throw new HttpException('No file provided', HttpStatus.BAD_REQUEST);
  }

  const fileUrl = await this.azureBlobService.upload(
    file,
    this.containerName
  );

  // Store metadata in database via microservice
  await this.azureBlobService.uploadCompanyPdf({
    fileUrl,
    companyId,
    fileName: file.originalname
  });

  return { message: 'PDF uploaded successfully' };
}
Pattern Breakdown:
- FileInterceptor('file') - Parses multipart/form-data
- @ApiConsumes - Tells Swagger this is a file upload
- @ApiBody - Shows a file input in Swagger UI
- Upload to Azure, then store metadata in database
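Before the file ever reaches Azure, it is also worth rejecting anything that is not actually a PDF. NestJS offers ParseFilePipe with FileTypeValidator and MaxFileSizeValidator for this; here is the rule itself pulled out as a plain helper (the 10 MB cap is an assumed limit, not from the original code):

```typescript
const MAX_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap

// Accept only files that claim a PDF MIME type, carry a .pdf extension,
// and stay under the size limit. The MIME type is client-supplied, so a
// stricter check would also inspect the file's magic bytes.
function isAcceptablePdf(file: {
  mimetype: string;
  originalname: string;
  size: number;
}): boolean {
  return (
    file.mimetype === 'application/pdf' &&
    file.originalname.toLowerCase().endsWith('.pdf') &&
    file.size <= MAX_BYTES
  );
}
```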
Download Endpoint (Streaming)
@Get('company/:companyId')
async getCompanyPdf(
  @Param('companyId') companyId: string,
  @Res() res: Response
) {
  // Get file metadata from database
  const company = await this.azureBlobService.getCompanyPdf(companyId);
  if (!company?.fileUrl) {
    throw new HttpException('No PDF found', HttpStatus.NOT_FOUND);
  }

  // Extract blob name from URL and decode
  const blobName = company.fileUrl.split('/').pop();
  const decodedBlobName = decodeURIComponent(blobName);

  // Get readable stream from Azure
  const fileStream = await this.azureBlobService.getFile(
    decodedBlobName,
    this.containerName
  );

  // Set headers for inline PDF viewing
  res.set({
    'Content-Type': 'application/pdf',
    'Content-Disposition': `inline; filename="${company.fileName}"`
  });

  // Pipe stream directly to response
  fileStream.pipe(res);
}
Critical Detail: We use decodeURIComponent() because Azure URLs encode special characters. If the original filename was "Report (2024).pdf", the blob name becomes "Report%20%282024%29.pdf".
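That extract-and-decode step is easy to get wrong, so here it is as an isolated helper (blobNameFromUrl is our name for it):

```typescript
// Recover the stored blob name from its public URL: take the last path
// segment, then undo the percent-encoding Azure applied when building the URL.
function blobNameFromUrl(fileUrl: string): string {
  const lastSegment = fileUrl.split('/').pop() ?? '';
  return decodeURIComponent(lastSegment);
}
```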
Update Endpoint
Updating a file requires deleting the old blob and uploading the new one:
@Put('company/:companyId/update')
@UseInterceptors(FileInterceptor('file'))
async updateCompanyPdf(
  @Param('companyId') companyId: string,
  @UploadedFile() file: Express.Multer.File
) {
  const company = await this.azureBlobService.getCompanyPdf(companyId);
  if (!company) {
    throw new HttpException('Company not found', HttpStatus.NOT_FOUND);
  }

  // Delete old file from Azure
  if (company.fileUrl) {
    const oldFileName = company.fileUrl.split('/').pop();
    const decodedBlobName = decodeURIComponent(oldFileName);
    await this.azureBlobService.deleteFile(decodedBlobName, this.containerName);
  }

  // Upload new file
  const newFileUrl = await this.azureBlobService.upload(file, this.containerName);

  // Update database metadata
  await this.azureBlobService.updateCompanyPdf({
    fileUrl: newFileUrl,
    companyId
  });

  return { message: 'PDF updated successfully' };
}
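One caveat with that order: the old blob is deleted before the new upload succeeds, so a failed upload leaves the company with no document at all. A sketch of the safer upload-first ordering, with the three operations injected as plain functions so the sequencing is visible on its own:

```typescript
type ReplaceDeps = {
  uploadNew: () => Promise<string>; // uploads the new file, returns its URL
  updateMetadata: (url: string) => Promise<void>;
  deleteOld: () => Promise<void>;
};

// Upload-first replacement: the old blob is only removed once the new one
// is stored and recorded. The worst failure mode is an orphaned old blob,
// never a company left without its document.
async function replacePdf(deps: ReplaceDeps): Promise<string> {
  const newUrl = await deps.uploadNew();
  await deps.updateMetadata(newUrl);
  await deps.deleteOld();
  return newUrl;
}
```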
5. Microservices Integration
In a microservices architecture, the API Gateway (where files are uploaded) is separate from the Auth service (where company metadata is stored). We use RabbitMQ for communication:
async uploadCompanyPdf(createResultDto: CreateResultDto): Promise<any> {
  // import { firstValueFrom } from 'rxjs';
  const pattern = { cmd: 'createResultPdf' };
  const payload = createResultDto;

  // send() expects the pattern object itself, and firstValueFrom
  // replaces the deprecated toPromise() from RxJS 7 onward
  return firstValueFrom(this.authMicroservice.send(pattern, payload));
}
Flow:
- API Gateway receives file upload
- Upload file to Azure Blob Storage
- Send metadata (fileUrl, companyId, fileName) to Auth service via RabbitMQ
- Auth service stores metadata in PostgreSQL
- Return success response
This pattern keeps services decoupled—the API Gateway doesn't need direct database access.
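The whole flow reduces to two injected operations: store the bytes, then publish the metadata. Sketched with the dependencies as plain functions (the shapes are ours; in the real service they are the Azure client and the RabbitMQ ClientProxy):

```typescript
type PdfMeta = { fileUrl: string; companyId: string; fileName: string };

// Gateway-side orchestration: upload first, then notify the Auth service.
// The gateway never touches the database; it only emits metadata.
async function handleCompanyUpload(
  file: { buffer: Buffer; originalname: string },
  companyId: string,
  upload: (buf: Buffer, name: string) => Promise<string>,
  notify: (meta: PdfMeta) => Promise<void>,
): Promise<string> {
  const fileUrl = await upload(file.buffer, file.originalname);
  await notify({ fileUrl, companyId, fileName: file.originalname });
  return fileUrl;
}
```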
6. Production Best Practices
Security
- Use SAS Tokens: For temporary public access, generate Shared Access Signatures instead of making containers public
- Validate File Types: Check MIME types and file extensions before upload
- Size Limits: Set a maximum upload size via Multer's limits.fileSize option; Multer imposes no file-size cap by default
- Access Control: Use Azure RBAC to limit who can upload/delete blobs
Performance
- Stream, Don't Buffer: Always use readableStreamBody for downloads
- CDN Integration: Enable Azure CDN for frequently accessed files
- Concurrent Uploads: Use Promise.all() for batch uploads
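The concurrent-upload point can be sketched in isolation; uploadOne stands in for the single-file upload method shown earlier:

```typescript
// Fan out independent uploads with Promise.all so they run concurrently
// rather than one after another. If any upload rejects, the whole batch
// rejects, which is usually the right default for an all-or-nothing API.
async function uploadMany(
  files: { name: string; buffer: Buffer }[],
  uploadOne: (f: { name: string; buffer: Buffer }) => Promise<string>,
): Promise<string[]> {
  return Promise.all(files.map(uploadOne));
}
```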
Error Handling
async upload(file: Express.Multer.File, containerName: string): Promise<string> {
  try {
    const uniqueFileName = uuidv4() + file.originalname;
    const blobClient = this.getBlobClient(uniqueFileName, containerName);
    await blobClient.uploadData(file.buffer);
    return blobClient.url;
  } catch (error) {
    console.error('Azure upload failed:', error);
    throw new HttpException(
      'Failed to upload file to storage',
      HttpStatus.INTERNAL_SERVER_ERROR
    );
  }
}
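Transient network errors are common against any remote storage, so it can be worth retrying before surfacing a 500. The Azure SDK clients also accept built-in retry options; this generic wrapper (our own helper, not an SDK API) is a sketch for operations outside the client:

```typescript
// Retry an async operation a fixed number of times, rethrowing the last
// error if every attempt fails. A production version would add backoff
// and only retry errors known to be transient.
async function withRetries<T>(op: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```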
Conclusion
Building a production-ready file management system isn't just about uploading files—it's about doing it right. Azure Blob Storage gives you enterprise-grade infrastructure without the operational overhead, and NestJS provides the perfect framework to build scalable, maintainable APIs on top of it.
The patterns we've covered solve real production challenges:
- UUID-based filenames eliminate race conditions and collisions in high-traffic systems
- Streaming downloads keep your memory footprint constant, whether you're serving 1MB or 100MB files
- Microservices decoupling lets you scale file storage independently from your business logic
- URL decoding prevents those subtle bugs that only appear with special characters in production
This architecture scales. Whether you're handling 100 PDFs for a small SaaS or millions of documents for an enterprise platform, these patterns remain the same. The beauty of cloud object storage is that you don't redesign when you scale—you just pay for more storage.
Start simple: one container, one service, basic upload/download. Add SAS tokens when you need temporary access. Integrate CDN when latency matters. Layer on complexity only when you need it. That's how you build systems that last.
Written by

Technical Lead and Full Stack Engineer leading a 5-engineer team at Fygurs (Paris, Remote) on Azure cloud-native SaaS. Graduate of 1337 Coding School (42 Network / UM6P). Writes about architecture, cloud infrastructure, and engineering leadership.