Microservices with Node.js: How I Built a System Handling 50K+ Users

This isn't another theoretical microservices article. This is the exact architecture I used to build CommerceX, a platform that handles 50K+ concurrent users with 99.9% uptime and sub-200ms response times.
[Hero Image: Microservices Architecture]
Table of Contents
- Why I Chose Microservices (And When I Wouldn't)
- The Architecture That Handles 50K+ Users
- Service Design Principles
- Inter-Service Communication (Sync vs Async)
- Event-Driven Architecture with Kafka
- Data Management: Database Per Service
- API Gateway Pattern
- Authentication Across Services
- Distributed Logging & Monitoring
- Docker & Deployment Strategy
- The Migration: Monolith → Microservices
- Lessons Learned (The Hard Way)
1. Why I Chose Microservices (And When I Wouldn't)
Let me be clear: microservices are NOT always the answer.
For my first two years as a developer, I built everything as monoliths. And honestly? That was the right call. Here's my decision framework:
The Decision Matrix
| Factor | Monolith | Microservices |
|--------|----------|---------------|
| Team size | 1-5 developers | 5+ developers |
| Expected users | <10K | 10K+ |
| Development speed needed | Fast MVP | Long-term scalability |
| Deployment complexity tolerance | Low | High |
| Domain complexity | Simple | Complex with bounded contexts |
| Budget | Limited | Sufficient for infrastructure |
For CommerceX, the math was clear:
- Multiple bounded contexts (orders, payments, inventory, users)
- Expected scale of 50K+ users
- Team of 5+ developers working on different features
- Each service needed to scale independently
So I went microservices. Here's exactly how...
[Continue with full technical deep dive]
5. Event-Driven Architecture with Kafka
This is where microservices get REALLY powerful.
The Problem with Synchronous Communication
```
Order Service → [HTTP] → Payment Service → [HTTP] → Inventory Service
```
If Inventory Service is DOWN, the entire chain FAILS ❌
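To see why this is brittle, here's a minimal sketch of that synchronous chain. The service calls are simulated with async functions (names are illustrative, not CommerceX's actual code): one unavailable dependency rejects the entire request, even though an earlier step already succeeded.

```typescript
// Simulated services: payment succeeds, inventory is down.
const paymentService = async (orderId: string) => `paid:${orderId}`
const inventoryService = async (_orderId: string): Promise<string> => {
  throw new Error('inventory-service unreachable')
}

// Synchronous chain: every hop must succeed, in order.
async function createOrderSync(orderId: string): Promise<string> {
  await paymentService(orderId)   // customer is charged...
  await inventoryService(orderId) // ...then this hop fails
  return 'order-confirmed'        // never reached
}

createOrderSync('order-1').catch((err) => {
  // The whole request fails, and the payment that already
  // went through now needs manual or compensating cleanup.
  console.log(`chain failed: ${(err as Error).message}`)
})
```

With a broker in the middle, the order service only needs Kafka to be reachable; downstream services catch up whenever they recover.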
The Solution: Event-Driven with Kafka
```
Order Service → publishes "OrderCreated" → Kafka Topic
                          │
         ┌────────────────┼────────────────┐
         ▼                ▼                ▼
  Payment Service  Inventory Service  Notification Service
   (subscribes)     (subscribes)       (subscribes)
```
If any service is DOWN, events are QUEUED and processed when it recovers ✅
Implementation
```typescript
// services/order/src/events/publisher.ts
import { randomUUID } from 'node:crypto'
import { Kafka, Partitioners } from 'kafkajs'

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: [process.env.KAFKA_BROKER!],
  ssl: true,
  sasl: {
    mechanism: 'scram-sha-256',
    username: process.env.KAFKA_USERNAME!,
    password: process.env.KAFKA_PASSWORD!,
  },
})

const producer = kafka.producer({
  createPartitioner: Partitioners.LegacyPartitioner,
  idempotent: true, // exactly-once writes per partition; kafkajs requires a single in-flight request
  maxInFlightRequests: 1,
  retry: { retries: 5 },
})

// Connect once at service startup
await producer.connect()

// `Order` is the service's domain type (import omitted here)
export async function publishOrderCreated(order: Order) {
  await producer.send({
    topic: 'orders.created',
    messages: [{
      key: order.id, // Partition by order ID so events for one order stay in order
      value: JSON.stringify({
        eventId: randomUUID(),
        eventType: 'ORDER_CREATED',
        timestamp: new Date().toISOString(),
        data: {
          orderId: order.id,
          userId: order.userId,
          items: order.items,
          totalAmount: order.totalAmount,
        },
        metadata: {
          source: 'order-service',
          version: '1.0',
          correlationId: order.correlationId,
        },
      }),
      headers: {
        'event-type': 'ORDER_CREATED',
        'content-type': 'application/json',
      },
    }],
  })
}
```

```typescript
// services/payment/src/events/consumer.ts
// `kafka`, `redis`, `logger`, and the payment helpers come from the
// service's own modules (imports omitted here).
const consumer = kafka.consumer({
  groupId: 'payment-service',
  sessionTimeout: 30000,
  heartbeatInterval: 3000,
})

await consumer.connect()
await consumer.subscribe({
  topics: ['orders.created'],
  fromBeginning: false,
})

await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    const event = JSON.parse(message.value!.toString())
    try {
      // Idempotency check: skip events we've already handled
      const processed = await redis.get(`event:${event.eventId}`)
      if (processed) {
        logger.info(`Event ${event.eventId} already processed, skipping`)
        return
      }
      // Process payment
      const payment = await processPayment({
        orderId: event.data.orderId,
        userId: event.data.userId,
        amount: event.data.totalAmount,
      })
      // Mark as processed (24h TTL)
      await redis.set(`event:${event.eventId}`, 'processed', 'EX', 86400)
      // Publish payment result
      await publishPaymentProcessed(payment)
      logger.info(`Payment processed for order ${event.data.orderId}`)
    } catch (error) {
      logger.error('Failed to process payment', {
        eventId: event.eventId,
        error: (error as Error).message,
      })
      // Dead-letter queue for failed messages
      await publishToDeadLetterQueue(topic, message, error as Error)
    }
  },
})
```

[Continue with remaining sections]
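The consumer calls a `publishToDeadLetterQueue` helper whose body isn't shown in this excerpt. One way to sketch it (the `.dlq` topic suffix and header names are my conventions, not necessarily CommerceX's): build a record that preserves the original message untouched and attaches the failure context as headers, then send it with the service's existing kafkajs producer.

```typescript
interface FailedMessage {
  key: Buffer | null
  value: Buffer | null
  headers?: Record<string, Buffer | string | undefined>
}

// Build the record destined for "<topic>.dlq". Publishing it is then a
// normal producer.send() with the service's existing kafkajs producer.
export function buildDeadLetterRecord(
  topic: string,
  message: FailedMessage,
  error: Error,
) {
  return {
    topic: `${topic}.dlq`,
    messages: [{
      key: message.key,     // keep the original partition key
      value: message.value, // keep the original payload untouched
      headers: {
        ...(message.headers ?? {}),
        'dlq-reason': error.message,
        'dlq-original-topic': topic,
        'dlq-failed-at': new Date().toISOString(),
      },
    }],
  }
}
```

Keeping the payload byte-for-byte identical matters: it lets an operator replay the message against the original topic once the bug is fixed.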
