
I am running Node v0.12.7 on Debian 7 x64 and I would like to benchmark its performance on a 16-core virtual private server with 4 GB of memory.

I am running the following minimalistic code:

// server.js
var http = require('http');
http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World');
}).listen(80,"0.0.0.0");

I am launching it with this command (in another terminal):

node server.js

At this point node is running on one core. Then I use ab to test its performance with this command:

ab -n 10000 -c 100 -k http://127.0.0.1/

... and get these results:

...
Requests per second:    3925.81 [#/sec] (mean)
...
Percentage of the requests served within a certain time (ms)
50%     35
...

I was wondering if any of you did similar tests and if you:

  • got better results
  • got same kind of results but were able to tweak your node app/server in order to obtain higher requests/s and/or lower latency.

I should mention that running it with pm2 in cluster mode on 15 cores brings me to 4,500 requests/second, which makes me think there is another bottleneck somewhere that I'm missing.

Thanks for any thoughts on this topic. Paul

  • FYI, 3925 requests/sec means each request is taking no more than ~0.25 ms of CPU (that's pretty quick). Keep in mind this covers accepting a TCP connection, running through the TCP handshake to establish the socket, receiving the actual HTTP request, parsing it, sending the response, then closing the connection. Clustering will make more of a difference when there's more processing per request. Commented Nov 13, 2015 at 0:23
  • Hi, thanks for your reply. I agree with all the above, but some people mention 13k requests per second on a single thread with an i7 and no specific configuration. On my Mac (with an i7) I am also around 4.5k/second. I have the feeling that the differences in performance are too large to be explained by hardware alone. I also checked my config and I can open 65k file descriptors, so the problem does not seem to come from there. Commented Nov 13, 2015 at 12:59
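The back-of-the-envelope number in the first comment can be checked directly: at N requests/second handled on one core, each request gets at most 1000/N milliseconds of CPU. A quick sketch of that arithmetic:

```javascript
// Per-request CPU budget implied by the ab result: at N req/s on a
// single core, each request can use at most 1000 / N milliseconds.
const reqPerSec = 3925.81;              // from the ab run above
const msPerRequest = 1000 / reqPerSec;  // CPU budget per request
console.log(msPerRequest.toFixed(4) + ' ms per request');
// prints "0.2547 ms per request"
```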

2 Answers


Posting my results (hope it can benefit someone else):

On a MacBook Pro, 2.2 GHz, Core i7, 16 GB. The test was done using JMeter; ab was freezing and throwing errors after processing ~15K requests.

Number of users: 200; each user makes 5,000 requests.

Result: Without Node Clustering:

Total Samples Processed: 1000000
Throughput: 22419 req/sec
Total Time: 44 seconds

Result: with Clustering - 8 Nodes (=numCpus)

Total Samples Processed: 1000000
Throughput: 36091 req/sec
Total Time: 27 seconds

Result: with Clustering - 6 Nodes (3/4 of total CPUs)

Total Samples Processed: 1000000
Throughput: 36685 req/sec
Total Time: 27 seconds

Result: with Clustering - 4 Nodes (=numCpus/2)

Total Samples Processed: 1000000
Throughput: 35604 req/sec
Total Time: 28 seconds

It's really great: the performance is almost the same for a cluster with 4, 6, or 8 nodes.

The server code is below.

"use strict";

var http = require('http');
const PORT = 8080;

var total = 0;

var server = http.createServer(function(request, response) {
  total++;

  if ((total % 1000) === 0) {
    console.log("Completed:" + total);
  }
  response.end('It Works!! Path Hit: ' + request.url);
});

server.listen(PORT, function() {
  console.log("Server listening on: http://localhost:%s", PORT);
});

The following code is used for Clustering.

"use strict";

const cluster = require('cluster');
const numCPUs = require('os').cpus().length;
var http = require('http');
const PORT = 8080;

var total = 0;

// To test with 6 or 4 workers instead, change the numCPUs declaration
// above (it is const, so it cannot be reassigned here).

console.log("Number of CPUs: " + numCPUs);

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
}
else {
  var server = http.createServer(function(request, response) {
    total++;

    if ((total % 1000) === 0) {
      console.log("Completed:" + total);
    }
    response.end('It Works!! Path Hit: ' + request.url);
  });

  server.listen(PORT, function() {
    console.log("Server listening on: http://localhost:%s", PORT);
  });
}

2 Comments

Awesome! Thanks for your answer. I'll try too with JMeter. ;)
Great! May I request that you do the same using pm2 in cluster mode using the initial code? I'm curious to see the difference between node managing the clustering itself and pm2.

I'm not really sure of the point of this question, but a quick test on a MacBook Pro with 16 GB of RAM and Node 5.0.0 gives:

Requests per second:    6839.72 [#/sec] (mean)
Time per request:       14.620 [ms] (mean)

Percentage of the requests served within a certain time (ms)
50%     14

2 Comments

Hi thanks for your answer. The result makes sense given the power of your computer. The point of this question is to discover if there are specific settings on the system or on the node app itself that would allow it to offer even higher default performances. For example it would be great to have a lower latency for only 50 concurrent calls.
I see. Well, as you kind of hint at yourself, your server most likely has multiple cores, so a typical production setup would use some sort of clustering, since Node is a (mostly) single-threaded process. Certainly when I ran this test, only one of my 8 cores was used. Maybe PM2, which you mentioned, or just nginx plus a process manager such as supervisor.
