Why Benchmark Go vs Node.js?

In my years of building scalable backends, I’ve seen the ‘Go vs Node.js’ debate surface in almost every architectural meeting. Some claim Go is dramatically faster because it’s compiled; others argue that for I/O-bound tasks, the Node.js event loop is more than enough. But anecdotal evidence isn’t engineering.

When benchmarking Go vs Node.js performance, the goal isn’t just to see which one has a higher ‘Requests Per Second’ (RPS) number, but to understand how they behave under pressure, how they handle memory, and where the bottlenecks actually occur.

Prerequisites

To follow along you’ll need a recent Go toolchain, a Node.js runtime, and the wrk HTTP benchmarking tool — ideally on a Linux machine so you can pin processes to cores with taskset.

Step 1: Building the Baseline Server

To keep the test fair, we need to implement the same logic in both languages: a simple JSON API endpoint that performs a small computation (to avoid testing just the network stack) and returns a response.

The Go Implementation

I’ve used the standard library for the Go implementation to avoid framework overhead.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Response struct {
	Message string `json:"message"`
	Value   int    `json:"value"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Simulate a small computation
	sum := 0
	for i := 0; i < 1000; i++ {
		sum += i
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(Response{Message: "Hello from Go", Value: sum})
}

func main() {
	http.HandleFunc("/bench", handler)
	// Surface bind/listen failures instead of silently exiting
	log.Fatal(http.ListenAndServe(":8080", nil))
}

The Node.js Implementation

For Node, I used Fastify instead of Express because it's significantly closer to the bare-metal performance of the runtime.

const fastify = require('fastify')({ logger: false });

fastify.get('/bench', async (request, reply) => {
  // Simulate a small computation
  let sum = 0;
  for (let i = 0; i < 1000; i++) {
    sum += i;
  }
  return { message: 'Hello from Node.js', value: sum };
});

fastify.listen({ port: 8080 }, (err) => {
  if (err) throw err;
});
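With either server running, you can sanity-check the endpoint before benchmarking. The value field should be 499500 from both implementations — that’s the sum 0 + 1 + … + 999 computed in the handler.

```shell
# Sanity-check the endpoint; the "value" field should be 499500
# (the sum of 0 through 999) from either server.
curl -s http://localhost:8080/bench
```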

Step 2: Setting Up the Benchmark Environment

Testing on your local machine while browsing Chrome is a recipe for inconsistent results. In my experience, the best way to benchmark is to isolate the process using taskset (on Linux) to pin the server to a specific CPU core.
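As a sketch of that isolation setup on Linux (the binary name ./server and the core numbers are hypothetical — adjust them to your own CPU):

```shell
# Pin the server under test to cores 0-3 so the scheduler
# can't migrate it mid-run (requires util-linux's taskset).
taskset -c 0-3 ./server &

# Pin the load generator to different cores so it doesn't
# compete with the server for CPU time.
taskset -c 4-11 wrk -t8 -c400 -d30s http://localhost:8080/bench
```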

We will use wrk, a modern HTTP benchmarking tool. As shown in the benchmark results coming up, the difference in concurrency handling is where these two truly diverge.

Step 3: Executing the Performance Test

Run the Go server first, then execute the following command in your terminal (12 threads, 400 open connections, for a 30-second duration):

wrk -t12 -c400 -d30s http://localhost:8080/bench

Now, stop the Go server, start the Node.js server, and run the exact same command. I recommend running each test 3 times and taking the average to account for 'cold start' jitter.
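Those three runs are easy to automate. Here is a small sketch that assumes wrk’s default output format, where throughput is reported on the 'Requests/sec:' line:

```shell
#!/bin/sh
# Run wrk three times against the endpoint and average the
# Requests/sec figures (assumes wrk's default output format).
URL=http://localhost:8080/bench
results=""
for run in 1 2 3; do
  rps=$(wrk -t12 -c400 -d30s "$URL" | awk '/^Requests\/sec:/ {print $2}')
  echo "run $run: $rps req/s"
  results="$results $rps"
done
echo "$results" | awk '{printf "average: %.2f req/s\n", ($1+$2+$3)/3}'
```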

Analyzing the Results

When I ran this on a 12-core Ryzen 5900X, the results were telling. Go typically handled 3x to 5x more requests per second than Node.js under high concurrency (400+ connections). However, at low concurrency (under 50), the difference was negligible.

This is because Go schedules goroutines across every available core, while a single Node.js process executes all JavaScript on one event-loop thread — under extreme load, that thread becomes the bottleneck. The event loop is brilliant for I/O, but the CPU-bound part of our test gave Go a clear edge.
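To make the scheduling difference concrete, here is a standalone sketch (separate from the benchmark servers) that spreads the same kind of CPU-bound summation across every core with goroutines:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sumRange computes the sum of [lo, hi) — the same kind of
// CPU-bound work the /bench handler performs.
func sumRange(lo, hi int) int {
	sum := 0
	for i := lo; i < hi; i++ {
		sum += i
	}
	return sum
}

func main() {
	const n = 1_000_000
	workers := runtime.NumCPU()
	chunk := n / workers

	var wg sync.WaitGroup
	partial := make([]int, workers)
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if w == workers-1 {
			hi = n // last worker picks up the remainder
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			partial[w] = sumRange(lo, hi)
		}(w, lo, hi)
	}
	wg.Wait()

	total := 0
	for _, p := range partial {
		total += p
	}
	fmt.Println(total) // sum of 0..999999 = 499999500000
}
```

A single Node.js process, by contrast, would run the equivalent loop on one thread unless you reach for worker_threads or cluster.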

Comparative bar chart showing Go vs Node.js requests per second under different concurrency levels

Pro Tips for Accurate Benchmarking

Close background applications before each run, pin the server and the load generator to separate cores, and treat the first run as a warm-up: V8’s JIT in particular needs a few seconds of traffic before Node.js reaches steady-state throughput.

Troubleshooting Common Issues

Issue: I'm seeing "Connection Refused" during the test.
Solution: Your OS is likely hitting its open file descriptor limit, since each socket counts as an open file. Run ulimit -n 65535 in the same shell before starting the server and the benchmark; the limit applies per session.

Issue: Node.js results are wildly inconsistent.
Solution: Ensure you aren't running any other heavy Node processes. Also, try using --max-old-space-size=4096 to prevent premature GC cycles from skewing the numbers.

What's Next?

Now that you know how to measure raw throughput, the next step is testing latency. High RPS is great, but p99 latency (the slowest 1% of requests) is what your users actually feel. I suggest trying a tool like k6 for more complex scenario testing.