Architecture

Microservices with Node.js: How I Built a System Handling 50K+ Users

March 27, 2025
22 min
This isn't another theoretical microservices article. This is the exact architecture I used to build CommerceX — a platform that handles 50K+ concurrent users with 99.9% uptime and sub-200ms response times.

[Hero Image: Microservices Architecture]

Table of Contents

  1. Why I Chose Microservices (And When I Wouldn't)
  2. The Architecture That Handles 50K+ Users
  3. Service Design Principles
  4. Inter-Service Communication (Sync vs Async)
  5. Event-Driven Architecture with Kafka
  6. Data Management: Database Per Service
  7. API Gateway Pattern
  8. Authentication Across Services
  9. Distributed Logging & Monitoring
  10. Docker & Deployment Strategy
  11. The Migration: Monolith → Microservices
  12. Lessons Learned (The Hard Way)

1. Why I Chose Microservices (And When I Wouldn't)

Let me be clear: microservices are NOT always the answer.

For my first two years as a developer, I built everything as monoliths. And honestly? That was the right call. Here's my decision framework:

The Decision Matrix

| Factor | Monolith | Microservices |
|--------|----------|---------------|
| Team size | 1-5 developers | 5+ developers |
| Expected users | <10K | 10K+ |
| Development speed needed | Fast MVP | Long-term scalability |
| Deployment complexity tolerance | Low | High |
| Domain complexity | Simple | Complex with bounded contexts |
| Budget | Limited | Sufficient for infrastructure |

For CommerceX, the math was clear:

  • Multiple bounded contexts (orders, payments, inventory, users)
  • Expected scale of 50K+ users
  • Team of 5+ developers working on different features
  • Each service needed to scale independently

So I went microservices. Here's exactly how...

[Continue with full technical deep dive]


5. Event-Driven Architecture with Kafka

This is where microservices get REALLY powerful.

The Problem with Synchronous Communication

Order Service → [HTTP] → Payment Service → [HTTP] → Inventory Service

If Inventory Service is DOWN, the entire chain FAILS ❌

The Solution: Event-Driven with Kafka

Order Service → publishes "OrderCreated" → Kafka Topic
                         │
     ┌───────────────────┼───────────────────┐
     ▼                   ▼                   ▼
Payment Service   Inventory Service   Notification Service
 (subscribes)       (subscribes)       (subscribes)

If any service is DOWN, events are QUEUED and processed when it recovers ✅

Implementation

// services/order/src/events/publisher.ts
import { Kafka, Partitioners } from 'kafkajs'

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: [process.env.KAFKA_BROKER!],
  ssl: true,
  sasl: {
    mechanism: 'scram-sha-256',
    username: process.env.KAFKA_USERNAME!,
    password: process.env.KAFKA_PASSWORD!,
  },
})

const producer = kafka.producer({
  createPartitioner: Partitioners.LegacyPartitioner,
  idempotent: true,                     // No duplicate writes on broker retries
  maxInFlightRequests: 1,               // kafkajs allows at most 1 in flight when idempotent
  retry: { retries: 5 },
})

// Connect once at service startup, before the first publish
await producer.connect()

export async function publishOrderCreated(order: Order) {
  await producer.send({
    topic: 'orders.created',
    messages: [{
      key: order.id,                     // Partition by order ID
      value: JSON.stringify({
        eventId: crypto.randomUUID(),
        eventType: 'ORDER_CREATED',
        timestamp: new Date().toISOString(),
        data: {
          orderId: order.id,
          userId: order.userId,
          items: order.items,
          totalAmount: order.totalAmount,
        },
        metadata: {
          source: 'order-service',
          version: '1.0',
          correlationId: order.correlationId,
        },
      }),
      headers: {
        'event-type': 'ORDER_CREATED',
        'content-type': 'application/json',
      },
    }],
  })
}
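One detail worth pinning down: producer and consumer must agree on this envelope shape. A sketch of a shared type plus a minimal validating parser — the `OrderItem` fields and the idea of a shared module are my assumptions, not CommerceX's actual model:

```typescript
// Shared envelope type matching the payload published above.
// The OrderItem fields are illustrative — adjust to the real Order model.
export interface OrderItem {
  productId: string
  quantity: number
  price: number
}

export interface OrderCreatedEvent {
  eventId: string
  eventType: 'ORDER_CREATED'
  timestamp: string                    // ISO-8601
  data: {
    orderId: string
    userId: string
    items: OrderItem[]
    totalAmount: number
  }
  metadata: {
    source: string
    version: string
    correlationId: string
  }
}

// Parse and minimally validate a raw Kafka message value.
export function parseOrderCreatedEvent(raw: string): OrderCreatedEvent {
  const event = JSON.parse(raw) as OrderCreatedEvent
  if (event.eventType !== 'ORDER_CREATED') {
    throw new Error(`Unexpected event type: ${event.eventType}`)
  }
  return event
}
```

Consumers then call the parser instead of a bare `JSON.parse`, so a malformed or mis-routed message fails loudly at the boundary rather than deep inside payment logic.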

// services/payment/src/events/consumer.ts
// (uses its own Kafka client, configured like the publisher's
// but with clientId: 'payment-service')
const consumer = kafka.consumer({
  groupId: 'payment-service',
  sessionTimeout: 30000,               // Kicked from the group after 30s without heartbeats
  heartbeatInterval: 3000,
})

await consumer.connect()

await consumer.subscribe({
  topics: ['orders.created'],
  fromBeginning: false,
})
 
await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    const event = JSON.parse(message.value!.toString())
 
    try {
      // Idempotency check
      const processed = await redis.get(`event:${event.eventId}`)
      if (processed) {
        logger.info(`Event ${event.eventId} already processed, skipping`)
        return
      }
 
      // Process payment
      const payment = await processPayment({
        orderId: event.data.orderId,
        userId: event.data.userId,
        amount: event.data.totalAmount,
      })
 
      // Mark as processed
      await redis.set(`event:${event.eventId}`, 'processed', 'EX', 86400)
 
      // Publish payment result
      await publishPaymentProcessed(payment)
 
      logger.info(`Payment processed for order ${event.data.orderId}`)
 
    } catch (error) {
      logger.error(`Failed to process payment`, {
        eventId: event.eventId,
        error: error.message,
      })
 
      // Dead letter queue for failed messages
      await publishToDeadLetterQueue(topic, message, error)
    }
  },
})
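One caveat about the idempotency check above: GET followed by SET is not atomic, so two consumers can briefly both see an event as unprocessed (for example, around a partition rebalance). Redis's `SET ... NX` closes that window with a single atomic claim. A sketch, assuming an ioredis-style client; `claimEvent` and `releaseEvent` are names I'm introducing:

```typescript
// Minimal shape of the Redis client this sketch relies on (ioredis-compatible).
type RedisLike = {
  set(key: string, value: string, mode: 'EX', ttl: number, flag: 'NX'): Promise<'OK' | null>
  del(key: string): Promise<number>
}

// Atomically claim an event: SET ... EX 86400 NX succeeds for exactly
// one caller; everyone else gets null until the key expires.
export async function claimEvent(redis: RedisLike, eventId: string): Promise<boolean> {
  const result = await redis.set(`event:${eventId}`, 'processed', 'EX', 86400, 'NX')
  return result === 'OK'
}

// If processing fails after a successful claim, release it so the
// redelivered message can be retried.
export async function releaseEvent(redis: RedisLike, eventId: string): Promise<void> {
  await redis.del(`event:${eventId}`)
}
```

In the consumer this replaces the get/set pair: claim before processing, and release in the catch block so a failed payment can still be retried on redelivery.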

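`publishToDeadLetterQueue`, called in the catch block above, isn't shown. A minimal sketch of one common convention — republishing the untouched message to a sibling `<topic>.dlq` topic with failure metadata in headers (the helper name and header keys are mine):

```typescript
// Shape of the failed message we forward. kafkajs actually delivers key and
// value as Buffers; they're typed as strings here for brevity and passed
// through untouched either way.
export interface FailedMessage {
  key: string | null
  value: string | null
  headers?: Record<string, unknown>
}

// Build the dead-letter record: original payload preserved as-is, failure
// context attached as headers so the DLQ can be inspected or replayed.
export function toDeadLetterRecord(topic: string, message: FailedMessage, error: Error) {
  return {
    topic: `${topic}.dlq`,             // convention: original topic name + ".dlq"
    messages: [{
      key: message.key,
      value: message.value,
      headers: {
        ...(message.headers ?? {}),
        'dlq-original-topic': topic,
        'dlq-error': error.message,
        'dlq-failed-at': new Date().toISOString(),
      },
    }],
  }
}
```

The actual publish is then `await producer.send(toDeadLetterRecord(topic, message, error))`, using the same kafkajs producer setup shown earlier.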
[Continue with remaining sections]

Jenil Rupapara

About Me

I'm a Senior MERN Stack Developer specializing in scalable web applications, microservices architecture, and high-performance system design. I focus on building ROI-driven solutions for global SaaS startups and enterprise-grade systems.
