
Concurrency Design Patterns – Managing Multiple Tasks Smoothly

Concurrency patterns help you design systems that handle multiple tasks at once—whether that’s parallel processing, asynchronous jobs, or coordinating different workers. These patterns make your apps more responsive, scalable, and reliable.

Here’s a quick overview of the key concurrency patterns covered in this article:

  • Queue (Producer–Consumer): Separates task producers and consumers using a queue, allowing asynchronous processing and workload balancing.
  • Thread Pool: Uses a fixed set of worker threads to execute multiple tasks efficiently, avoiding the overhead of creating new threads.
  • Reactor: Handles service requests concurrently by dispatching events to registered handlers; foundational to the Node.js event loop.
  • Future / Promise: Represents a value that will be available in the future, allowing non-blocking asynchronous operations.
  • Saga Pattern: Manages long-running distributed transactions by executing steps independently and using compensating actions on failure.

These patterns help manage multiple tasks efficiently, improve responsiveness, and keep your system scalable and maintainable. They are especially useful in Node.js and other asynchronous, event-driven environments.

# Queue (Producer–Consumer) Pattern

The Queue pattern, often called the Producer–Consumer pattern, is one of the most common concurrency patterns.
It separates the components that create tasks (producers) from the components that process them (consumers), using a queue as a buffer between them.

Think of it like a restaurant kitchen: waiters (producers) take orders and place them on the order board (queue), while chefs (consumers) pick up orders from the board and cook them. This way, waiters don’t wait for the cooking to finish before taking the next order.

This pattern helps balance workloads and prevent bottlenecks when task production and processing happen at different speeds.

Simple Producer–Consumer in Node.js:

```javascript
const queue = [];
let processing = false;

// Producer: adds an item to the queue and kicks off processing
function produce(item) {
  console.log(`Produced: ${item}`);
  queue.push(item);
  processQueue();
}

// Consumer: simulates 1 second of work per item
function consume(item) {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log(`Consumed: ${item}`);
      resolve();
    }, 1000);
  });
}

// Drains the queue one item at a time; the flag prevents overlapping runs
async function processQueue() {
  if (processing) return;
  processing = true;

  while (queue.length > 0) {
    const item = queue.shift();
    await consume(item);
  }

  processing = false;
}

// Usage
produce('Task 1');
produce('Task 2');
produce('Task 3');
```

Pros:

  • Decouples producers from consumers, allowing them to work at different speeds.
  • Prevents producers from overloading consumers by buffering tasks.
  • Makes scaling easier — you can add more consumers to handle more work (see the sketch below).
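
Because the only shared piece is the queue, adding consumers is mostly a matter of starting more of them. Below is a minimal sketch of several consumers draining one queue concurrently; the consumer count and the simulated 500 ms of work are arbitrary choices, not part of the example above.

```javascript
// Sketch: three consumers draining one shared queue concurrently
const queue = ['Task 1', 'Task 2', 'Task 3', 'Task 4', 'Task 5', 'Task 6'];

// Simulates the work done for one item (500 ms is arbitrary)
function consume(consumerId, item) {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log(`Consumer ${consumerId} finished ${item}`);
      resolve();
    }, 500);
  });
}

// Each consumer keeps pulling the next item until the queue is empty
async function consumer(consumerId) {
  while (queue.length > 0) {
    const item = queue.shift();
    await consume(consumerId, item);
  }
}

// Start three consumers; total processing time drops roughly threefold
Promise.all([consumer(1), consumer(2), consumer(3)])
  .then(() => console.log('All tasks processed'));
```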

Cons:

  • Requires managing the queue and ensuring thread/process safety in multi-threaded environments.
  • Increases complexity due to extra components (queue management, error handling).

Real-World Usage Cases:

  • Background job processing (e.g., sending emails, processing images).
  • Message queues like RabbitMQ, Kafka, or AWS SQS.
  • Logging systems that buffer logs before writing.
  • Asynchronous task scheduling in Node.js backends.

When to Use:

  • When producers and consumers work at different speeds.
  • When you want to avoid blocking the producer while waiting for tasks to complete.

When to Avoid:

  • When tasks must be processed immediately without any delay.
  • When system complexity must be kept minimal.

# Thread Pool Pattern

The Thread Pool pattern reuses a fixed set of worker threads to execute multiple tasks.
Instead of creating a new thread for every task (which can be expensive), you keep a pool of threads ready, and tasks are assigned to them as needed.

Think of it like a delivery company with a fixed number of drivers. New delivery requests don’t hire a new driver each time — they wait until a driver is available. This approach avoids the overhead of constantly creating and destroying threads.

In JavaScript (Node.js), while we don’t have traditional threads for JS code, the concept applies to worker threads and libuv’s thread pool for I/O and CPU-intensive tasks.

Simple Thread Pool with Node.js Worker Threads:

```javascript
const { Worker } = require('worker_threads');

class ThreadPool {
  constructor(size) {
    this.size = size;
    this.workers = [];   // idle workers ready to take a task
    this.queue = [];     // tasks waiting for a free worker
    this.activeWorkers = 0;

    for (let i = 0; i < size; i++) {
      this.workers.push(new Worker('./worker.js'));
    }
  }

  runTask(taskData) {
    return new Promise((resolve, reject) => {
      const worker = this.workers.pop();
      if (worker) {
        this.activeWorkers++;
        worker.once('message', result => {
          // Return the worker to the pool and start the next queued task
          this.workers.push(worker);
          this.activeWorkers--;
          this._next();
          resolve(result);
        });
        worker.postMessage(taskData);
      } else {
        // No idle worker: queue the task until one is returned
        this.queue.push({ taskData, resolve, reject });
      }
    });
  }

  _next() {
    if (this.queue.length > 0 && this.workers.length > 0) {
      const { taskData, resolve, reject } = this.queue.shift();
      this.runTask(taskData).then(resolve).catch(reject);
    }
  }

  destroy() {
    // Terminate all workers so the process can exit
    for (const worker of this.workers) worker.terminate();
  }
}

// Example worker.js file
// const { parentPort } = require('worker_threads');
// parentPort.on('message', data => {
//   let sum = 0;
//   for (let i = 0; i < data; i++) sum += i;
//   parentPort.postMessage(sum);
// });

// Usage
(async () => {
  const pool = new ThreadPool(2);

  const results = await Promise.all([
    pool.runTask(1e7),
    pool.runTask(1e7),
    pool.runTask(1e7),
  ]);

  console.log(results);
  pool.destroy();
})();
```

Pros:

  • Avoids the overhead of creating/destroying threads for each task.
  • Controls concurrency by limiting the number of active workers.
  • Can improve performance for CPU-bound tasks.

Cons:

  • Requires careful queue management.
  • Fixed pool size can become a bottleneck if too small or waste resources if too large.
  • In Node.js, it has limited use for pure JS logic (the main JS thread is single-threaded), but it is helpful for CPU-heavy tasks via worker threads.

Real-World Usage Cases:

  • Handling multiple parallel API requests without flooding the server (see the sketch after this list).
  • Processing large datasets in chunks using worker threads.
  • Image/video processing pipelines.
  • Game servers that process player actions in parallel.
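
For that API-request case you do not necessarily need worker threads; the pooling idea also applies to plain promises by capping how many requests are in flight at once. The sketch below is illustrative: runWithLimit, fetchUser, and the limit of 2 are made-up names and values, not part of the thread pool example above.

```javascript
// Sketch: a "pool" of at most `limit` concurrently running async tasks.
// No worker threads involved; this only caps in-flight promises.
async function runWithLimit(tasks, limit) {
  const results = [];
  let index = 0;

  // Each "slot" acts like a worker: it keeps pulling tasks until none remain
  async function slot() {
    while (index < tasks.length) {
      const current = index++;
      results[current] = await tasks[current]();
    }
  }

  await Promise.all(Array.from({ length: limit }, slot));
  return results;
}

// Usage with a hypothetical API call (fetchUser just simulates a request)
const fetchUser = id =>
  new Promise(resolve => setTimeout(() => resolve(`user ${id}`), 300));

runWithLimit(
  [1, 2, 3, 4, 5].map(id => () => fetchUser(id)),
  2 // at most two requests in flight at any time
).then(console.log);
```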

When to Use:

  • When tasks are CPU-intensive or involve blocking operations.
  • When creating a thread per task is too costly.

When to Avoid:

  • For lightweight, short-lived tasks that are better handled asynchronously.
  • When the workload varies wildly and a fixed-size pool is inefficient.

# Reactor Pattern

The Reactor pattern is a design pattern for handling service requests delivered concurrently to an application by one or more inputs.
It works by registering event handlers for different types of events and dispatching them when events occur.
Think of it like a restaurant host: whenever a customer arrives, the host signals the correct waiter to serve them; the host doesn't serve customers directly, they just dispatch each one to the right handler.

In Node.js, the Reactor pattern is at the core of the event loop, handling asynchronous I/O efficiently without blocking the main thread.

Simple Reactor Pattern Example with EventEmitter:

```javascript
const EventEmitter = require('events');

class Reactor extends EventEmitter {
  handleEvent(event, data) {
    this.emit(event, data);
  }
}

// Usage
const reactor = new Reactor();

// Register handlers
reactor.on('dataReceived', data => {
  console.log('Handling data:', data);
});

reactor.on('error', err => {
  console.error('Handling error:', err);
});

// Simulate events
reactor.handleEvent('dataReceived', { id: 1, value: 'Hello' });
reactor.handleEvent('error', new Error('Something went wrong'));
```

Pros:

  • Handles multiple events concurrently without blocking.
  • Decouples event dispatching from event handling.
  • Efficient for I/O-bound applications with many simultaneous connections.

Cons:

  • Requires careful handling of state in asynchronous callbacks.
  • Complex error handling can become tricky.
  • Can be less intuitive for beginners used to sequential flow.

Real-World Usage Cases:

  • Node.js core event loop and asynchronous I/O.
  • Web servers handling multiple client connections (see the sketch after this list).
  • GUI frameworks where events are dispatched to registered handlers.
  • Network applications like chat servers or streaming services.
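
Node's built-in net module exposes this structure directly for the web and chat server cases above: you register handlers for connection, data, and close events, and the event loop invokes them as socket activity arrives. A minimal echo-server sketch follows; the port number is arbitrary.

```javascript
// Sketch: reactor-style dispatch with Node's net module
const net = require('net');

const server = net.createServer(socket => {
  // Handler for incoming data on this connection
  socket.on('data', chunk => {
    console.log('Received:', chunk.toString().trim());
    socket.write(`Echo: ${chunk}`);
  });

  // Handler for the client closing the connection
  socket.on('end', () => console.log('Client disconnected'));

  // Handler for socket-level errors
  socket.on('error', err => console.error('Socket error:', err.message));
});

server.listen(3000, () => console.log('Echo server listening on port 3000'));
```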

When to Use:

  • For I/O-heavy applications where you want to handle many events concurrently.
  • When you want to decouple event detection from event handling.

When to Avoid:

  • For simple, single-threaded applications with minimal events.
  • When tasks are CPU-bound, as the event loop can become blocked.

# Future / Promise Pattern

The Future (or Promise) pattern represents a value that may not be available yet but will be at some point in the future.
It allows your code to continue running while waiting for long-running operations, like fetching data from a server.
Think of it like ordering a pizza: you don’t wait by the counter doing nothing — you get a receipt (the promise) and continue with other tasks. When the pizza is ready, you get notified.

In JavaScript, Promises are the standard way to implement this pattern.

Promise Pattern Example:

```javascript
function fetchData(url) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (url) {
        resolve({ data: `Data from ${url}` });
      } else {
        reject(new Error('Invalid URL'));
      }
    }, 1000);
  });
}

// Usage
fetchData('https://example.com')
  .then(response => {
    console.log('Received:', response.data);
  })
  .catch(error => {
    console.error('Error:', error.message);
  });

// Using async/await
(async () => {
  try {
    const result = await fetchData('https://example.com');
    console.log('Async/Await received:', result.data);
  } catch (err) {
    console.error('Async/Await error:', err.message);
  }
})();
```

Pros:

  • Allows non-blocking code execution.
  • Improves readability with .then/.catch or async/await.
  • Makes error handling for asynchronous operations more consistent.

Cons:

  • Can be confusing for beginners, especially with chaining or nested promises (see the sketch after this list).
  • Requires careful error handling to avoid unhandled rejections.
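
Most of that confusion comes from nesting .then callbacks instead of chaining or awaiting them. The sketch below contrasts the two styles; the fetchData stub and the extra URLs are illustrative stand-ins for the example above.

```javascript
// Stub standing in for the fetchData function from the example above
const fetchData = url =>
  new Promise(resolve =>
    setTimeout(() => resolve({ data: `Data from ${url}` }), 300));

// Nested version: each .then starts the next call inside the callback
fetchData('https://example.com').then(first => {
  fetchData('https://example.com/details').then(second => {
    console.log('Nested:', first.data, second.data);
  });
});

// Flattened version: async/await for sequential calls,
// Promise.all for independent calls that can run in parallel
(async () => {
  const first = await fetchData('https://example.com');
  const second = await fetchData('https://example.com/details');
  console.log('Sequential:', first.data, second.data);

  const [a, b] = await Promise.all([
    fetchData('https://example.com/a'),
    fetchData('https://example.com/b'),
  ]);
  console.log('Parallel:', a.data, b.data);
})();
```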

Real-World Usage Cases:

  • HTTP requests in web applications.
  • Database queries in Node.js.
  • File system or network I/O operations (see the promisify sketch after this list).
  • Any asynchronous task where the result is not immediately available.
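
For file system and other callback-style Node APIs, the usual route to this pattern is util.promisify, which wraps a callback function so it returns a Promise. A minimal sketch follows; the file path is illustrative.

```javascript
// Sketch: turning a callback-based API into a Promise with util.promisify
const { promisify } = require('util');
const fs = require('fs');

const readFileAsync = promisify(fs.readFile);

(async () => {
  try {
    // './notes.txt' is an illustrative path; point it at a real file
    const content = await readFileAsync('./notes.txt', 'utf8');
    console.log('File contents:', content);
  } catch (err) {
    console.error('Read failed:', err.message);
  }
})();
```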

When to Use:

  • For asynchronous operations that return results in the future.
  • When you want to write non-blocking, readable code.

When to Avoid:

  • For simple synchronous tasks — Promises add unnecessary complexity.
  • When performance is critical and async overhead is undesirable.

# Saga Pattern

The Saga pattern is used to manage long-running distributed transactions by splitting them into a series of smaller, independent steps.
Each step executes a local transaction and publishes an event or triggers the next step. If a step fails, compensating transactions undo the previous work.

Think of it like booking a trip: you reserve a flight, then a hotel, then a rental car. If the hotel booking fails, you cancel the flight reservation — each step is independent, but the overall process must remain consistent.

This pattern helps maintain eventual consistency across multiple services without using distributed locks.

Simple Saga Pattern Example:

```javascript
class Saga {
  constructor() {
    this.steps = [];
  }

  // Each step is paired with a compensation that undoes it
  addStep(step, compensation) {
    this.steps.push({ step, compensation });
    return this;
  }

  async execute() {
    const completed = [];
    try {
      for (const { step, compensation } of this.steps) {
        await step();
        completed.push(compensation);
      }
      console.log('Saga completed successfully');
    } catch (err) {
      // Undo the completed steps in reverse order
      console.error('Saga failed, executing compensations');
      for (const compensation of completed.reverse()) {
        await compensation();
      }
    }
  }
}

// Usage
const saga = new Saga();

saga
  .addStep(
    async () => { console.log('Booking flight'); },
    async () => { console.log('Canceling flight'); }
  )
  .addStep(
    async () => { console.log('Booking hotel'); },
    async () => { console.log('Canceling hotel'); }
  )
  .addStep(
    async () => { throw new Error('Car rental failed'); },
    async () => { console.log('Canceling car rental'); }
  );

saga.execute();
```

Pros:

  • Enables distributed transactions without global locks.
  • Supports compensating actions for failure recovery.
  • Improves system responsiveness by allowing independent steps to run concurrently.

Cons:

  • More complex than traditional transactions.
  • Requires careful design of compensating actions (see the retry sketch after this list).
  • Debugging distributed sagas can be challenging.
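
One practical guideline is that compensations should be idempotent, because they can fail too and may need to be repeated. The sketch below shows a hypothetical withRetry wrapper that could guard the compensations passed to the Saga class above; the attempt count is arbitrary.

```javascript
// Sketch: retry wrapper for a compensating action (assumes it is idempotent)
async function withRetry(compensation, attempts = 3) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      await compensation();
      return;
    } catch (err) {
      console.error(`Compensation attempt ${attempt} failed:`, err.message);
      if (attempt === attempts) throw err; // surface for manual intervention
    }
  }
}

// Usage: wrap a compensation when adding a step to the Saga class above
// saga.addStep(
//   async () => { console.log('Booking hotel'); },
//   () => withRetry(async () => { console.log('Canceling hotel'); })
// );
```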

Real-World Usage Cases:

  • Booking systems (flights, hotels, rentals).
  • E-commerce order processing across multiple services.
  • Payment processing with multiple financial systems.
  • Any workflow that requires eventual consistency across microservices.

When to Use:

  • When transactions span multiple services that cannot share a single database transaction.
  • When you need to allow concurrent execution of independent steps with rollback capability.

When to Avoid:

  • For simple, single-database transactions where ACID guarantees suffice.
  • When compensating actions are difficult or impossible to define.
