In this post, I compare Node.js, Ruby, Python, and C (optimized and unoptimized) in basic data structure, looping, and function-calling operations. Node.js wins hands down.

Node.js, Ruby, Python, versus C

March 28th 2020

Nearly five years ago, I created the repository NodeJS Performance. At the time, my goal was to figure out just how fast Node.js really was compared to other similar languages like Ruby and Python. Google had already done quite a bit of work to improve the performance of the V8 engine in Chrome and I wanted to see just how much of an effect this had.

What really inspired me were YouTube videos like these (from over 10 years ago!):

Google I/O 2009 - V8: High Performance JavaScript Engine
Why is Google Chrome Fast? Spotlight on V8 JavaScript Engine

The last couple of months inspired me to go back and rebuild my old performance comparison, rerunning it against the most recent versions of Ruby, Python, Node.js, and GCC to see whether my original conclusions still hold up.

In the last ten years, Node.js has continued to hold the top position, just behind optimized C and well ahead of Python and Ruby. This holds for basic mathematical operations, function calls, type casting, and loops.

The results are clear. Node.js 12.x is an undisputed top performer compared to Ruby and Python. Why people still tout Python for statistics and mathematical operations is completely beyond me. I suppose the only reasons are the existence of statistical libraries for Python and the endless hype against JavaScript based on its origins.

To put this into perspective, a Python process would need over 50x the number of CPUs to perform the same math as a Node.js process:

March 2020 Results

Language                              Time (ms)    Multiplier (x)
C (optimized, clang-1100.0.33.17)           918              1.00
Node.js (v12.16.1)                        1,127              1.23
C (unoptimized, clang-1100.0.33.17)       3,632              3.96
Ruby (2.3.7p456)                         34,669             37.77
Python (3.8.1)                           72,369             78.83

These numbers show Ruby running roughly 30x slower than Node.js, and the latest version of Python roughly 64x slower.
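
The multiplier column is just each runtime's elapsed time divided by optimized C's. A quick sketch of that arithmetic (timings taken from the results table above):

```javascript
// Timings (ms) from the March 2020 results table.
const times = {
  "C (optimized)": 918,
  "Node.js": 1127,
  "C (unoptimized)": 3632,
  "Ruby": 34669,
  "Python": 72369,
};

// Each multiplier is the runtime's time relative to optimized C.
const baseline = times["C (optimized)"];
for (const [lang, ms] of Object.entries(times)) {
  console.log(`${lang}: ${(ms / baseline).toFixed(2)}x`);
}

// Comparing Python to Node.js directly: 72369 / 1127 ≈ 64x.
```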

These results are nearly identical to those from when I ran this test nearly 5 years ago. This is not a benchmark of complex operations or deep mathematical libraries. It covers basic mathematics on variables that should already be loaded into the CPU cache. See the code for yourself, in C:

#include <stdio.h>
#include <time.h>

long double func (long double a) {
  return a / (long double)1000.0;
}

int main () {
  clock_t start = clock();

  long double d = 0.0;

  for (long double i = 0; i < 100000000; i++) {
    d += (long double)((long long)i >> 1);

    if (((long long)d) % 2 == 0)
      d += func(i);
  }

  clock_t end = clock();
  clock_t elapsed = (end - start) / (CLOCKS_PER_SEC / 1000);

  printf("%Le\n", d);
  printf("%lu ms\n", elapsed);

  return 0;
}

Feel free to run this yourself; here is the raw output of my results:

----- Node Version -----
----- GCC Version -----
Configured with: --prefix=/Applications/ --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.0 (clang-1100.0.33.17)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Applications/
----- Python Version -----
Python 3.8.1
----- Ruby Version -----
ruby 2.3.7p456 (2018-03-28 revision 63024) [universal.x86_64-darwin18]
1) C Unoptimized -----
gcc for.c: 
3632 ms
2) C Optimized -----
gcc -Wall -O2 for.c
918 ms
3) Node -----
node for.js
1127 ms
4) Ruby -----
ruby for.rb
34668.87899999999 ms
5) Python -----
72369 ms

To be fair, I am quite aware that performance is a complex thing to measure. My measurements here are based on my understanding of basic compiler optimizations like loop unrolling and common subexpression elimination. I have not addressed more complex factors like cache optimization. My goal here was to test the very basics of mathematical operations and type casting.

My general conclusion is that for repeated operations involving function calls and variables of various types (e.g. integer vs. double), Node.js performs on par with optimized C, whereas Ruby and Python fall far, far behind.