
Running a Cluster of Node.js Processes on a Multi-Core System

By default, a Node.js program runs as a single-threaded application. To utilize all the available cores in your system, you can use the Node.js cluster module. This module forks your program into several processes that can run across the available cores.

For example, suppose you have a server that accepts a client request, generates random characters, and sends back a response. To increase the load, the random character generator repeats its work 10,000 times. You will then make 100 requests to the server at once, so the requests get distributed among the server's worker processes.

1. Create the following server program, server.js.

const http = require('http');
const cluster = require('cluster');
const coreNumber = require('os').cpus().length;

// define server object
const server = http.createServer(function(req, res){
  console.log(`Handled by PID ${process.pid}.`);

  // do expensive computation
  let allowedChars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let str;
  for (let i=0; i<10000; i++) {
    str = '';
    for (let j=0; j<100; j++) {
      str += allowedChars.charAt(Math.floor(Math.random() * allowedChars.length));
    }
  }

  res.setHeader('Content-Type', 'application/json');
  res.writeHead(200);
  res.end(JSON.stringify({message:'Welcome to Hello API', code:str}));
});

// server initiation function
function init() {
  server.listen(3000, ()=>{
    console.log(`Server is running on port 3000 with PID ${process.pid}`);
  });
}

// initiate cluster
if (cluster.isMaster) { // primary process (cluster.isPrimary in Node.js 16+)

  // spawn as many worker processes as there are cores
  for (let i=0; i<coreNumber; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker)=>{
    console.log(`A server worker with PID ${worker.process.pid} died.`);
  });
} else { // forked processes
  // initiate server
  init();
}

Even though the program starts the server in multiple processes, all of them can listen on the same port because the cluster module manages the sharing: the primary process accepts incoming connections and distributes them among the workers.
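Note that the exit handler above only logs when a worker dies. A common enhancement is to fork a replacement so the pool of workers stays full; the following is a minimal sketch of that pattern:

// a minimal sketch: respawn a replacement worker whenever one dies
cluster.on('exit', (worker, code, signal) => {
  console.log(`Worker ${worker.process.pid} died (${signal || code}). Spawning a replacement...`);
  cluster.fork();
});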

2. Create a client program client.js that sends multiple requests to the server.

const http = require('http');

const requestOptions = {
  protocol: 'http:',
  hostname: 'localhost',
  path: '/',
  method: 'GET',
  port: 3000
};

for (let i = 0; i<100; i++) {
  const request = http.request(requestOptions, (res)=>{
    console.log(`Request status code: ${res.statusCode}`);
  });

  request.end();
}
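The client above only logs the status code. If you also want to inspect the response body (the JSON containing the generated code), one way is a small variation of the request callback:

// collect the response body chunks, then print status and body together
const request = http.request(requestOptions, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    console.log(`Request status code: ${res.statusCode}, body: ${body}`);
  });
});

request.end();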

3. Run the server.

node server.js

4. Run the client.

node client.js
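With both programs running, the server terminal should print lines similar to the following (the PIDs here are only illustrative and will differ on your machine):

Server is running on port 3000 with PID 4301
Server is running on port 3000 with PID 4302
Server is running on port 3000 with PID 4303
Server is running on port 3000 with PID 4304
Handled by PID 4302.
Handled by PID 4301.
Handled by PID 4304.
...

The requests are handled by different PIDs, which confirms that multiple worker processes are sharing the load, while the client prints Request status code: 200 for each request.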
