Maxing out Concurrent Websocket Connections on Node.js

Recently, I was faced with the challenge of getting as many concurrent connections as possible out of a single Node.js instance running Express and Socket.IO.

During the exploration, it became clear that a lot of bits and pieces need to be configured to get the most out of your application. These findings are documented below in the hope that they will be useful to your project.

Node.js Configurations

At the Node.js layer, the following configurations were made:

1. Schedule the garbage collector manually

If you do not run your garbage collector manually, you may run into issues where connections are paused while collection runs. While this is unlikely to cause downtime, it will increase the latency of your app when facing a large number of concurrent connections.

To run the garbage collection manually, be sure to initialize your app with the --expose-gc flag. For example:

node --expose-gc ./index.js

This will give you access to the global.gc function, which you can use to trigger garbage collection manually. The following code works for me when inserted at the start of the process:

function scheduleGc() {
  if (!global.gc) {
    console.log('Garbage collection is not exposed');
    return;
  }

  // Pick a random delay between 30 and 45 seconds
  let nextCall = 30 + (Math.random() * 15);

  setTimeout(() => {
    global.gc();
    scheduleGc();
  }, nextCall * 1000);
}

scheduleGc();


This will run garbage collection at a random interval between 30 and 45 seconds, rescheduling itself indefinitely, and avoids long collection pauses when facing a large number of concurrent connections.

2. Disable idle garbage collection & increase heap size

You will want to use the --nouse-idle-notification and --max-old-space-size=8192 flags. These optimizations deactivate the idle garbage collection process and increase the heap memory available to each Node process to 8GB. Your initialization command will now look like this:

node --nouse-idle-notification --expose-gc --max-old-space-size=8192 ./index.js

3. Use the cluster module

By default, Node.js runs a single process. This is fine if you only have one CPU, but to get the most out of your machine you need to spawn one process per core. To do this, use the cluster module in your Node.js application.

When using Express, your code will look something like this:

const cluster = require('cluster');

if (cluster.isMaster) {
  const numCPUs = require('os').cpus().length;

  console.log('Master cluster setting up ' + numCPUs + ' workers...');

  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('online', function(worker) {
    console.log('Worker ' + worker.process.pid + ' is online');
  });

  cluster.on('exit', function(worker, code, signal) {
    console.log('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
    console.log('Starting a new worker');
    cluster.fork();
  });
} else {
  // Each worker runs its own copy of the server (port 3000 is just an example)
  const app = require('express')();
  const server = app.listen(3000);
}

Note: Using the cluster module means you will need to make sure your servers are using the redis adapter package to keep in sync.

Socket.IO Configurations

1. Use the redis adapter

Since you are using multiple processes, you must make sure that your server is equipped to communicate between all of them. This is made possible by using the redis-adapter package and having a separate redis server set up.
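A minimal wiring sketch, assuming the socket.io-redis package and a redis server reachable on localhost:6379 (both the package name and the host/port are assumptions based on a typical setup):

```javascript
// Attach the redis adapter so events broadcast across all worker processes
const io = require('socket.io')(server);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));
```

With the adapter attached, an io.emit() in one worker reaches clients connected to any of the other workers.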

2. Set perMessageDeflate to false

Socket.IO is a wonderful tool, but by default it will exhaust memory and never reclaim it due to this setting. This issue can be resolved by setting perMessageDeflate to false at initialization (there is an ongoing discussion about why it is enabled by default). More information about perMessageDeflate can be found in the Socket.IO server documentation. Your init code will now look something like this:

const io = require('socket.io')(server, { perMessageDeflate: false });

Ubuntu Configurations

You will need to make some updates to your server configuration to allow for a very high number of concurrent connections. I am not primarily a sysadmin, so my description of what these commands do is a bit light. That said, you can look up any of them for more information.

Increase the max open file limit by entering the following in your server's shell:

ulimit -n 1000000

Add the following to /etc/security/limits.d/custom.conf

root soft nofile 1000000
root hard nofile 1000000
* soft nofile 1000000
* hard nofile 1000000
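After logging back in, you can verify that the new limits apply to your session:

```shell
# Show the soft and hard open-file limits for the current shell
ulimit -Sn
ulimit -Hn
```

Both should report 1000000 once the limits.d change has taken effect.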

Add the following to /etc/sysctl.conf

fs.file-max = 1000000
fs.nr_open = 1048576
net.ipv4.netfilter.ip_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 16384 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_max_tw_buckets = 360000
net.core.netdev_max_backlog = 2500
vm.min_free_kbytes = 65536
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535

Reload your settings with this command:

sysctl -p
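To confirm the kernel picked up the new values, you can read them back from /proc:

```shell
# These should echo the values set in /etc/sysctl.conf
cat /proc/sys/fs/file-max
cat /proc/sys/fs/nr_open
```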

Update your nginx settings, open /etc/nginx/nginx.conf and make sure the top of the config looks like the following:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
worker_rlimit_nofile 65535;

events {
        worker_connections 65535;
        multi_accept on;
        use epoll;
}
Then reload nginx with: service nginx reload.
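One detail worth noting: for WebSocket traffic to pass through nginx at all, your server block needs the HTTP Upgrade headers. A minimal sketch, assuming your Node app listens on port 3000 (the port is an assumption for illustration):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # Required for the WebSocket handshake to be forwarded
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```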

Note: Nginx was a requirement for my project. While nginx is a very useful tool, with it in front of your app you have a hard limit of 65,535 concurrent connections. This is likely sufficient for your use case, but if you expect more connections on your service, you should remove nginx and rely on the increased file limits configured above.

So let's do this.

Try Zigpoll and get the most out of your visitors.
