When 500% Faster is Garbage
I want to get this out of the way now, so that later this month, when I publish the next major release of the hapi framework for node — v17.0.0 — no one brags about the fact that it is 500% faster.
I’ve said it before, but it is worth saying again: benchmarking frameworks is fucking stupid.
But now I get to demonstrate it with numbers without having to mock anyone else’s framework. The problem I had until now was that I was the author of the “slowest” framework, so my arguments against benchmarks were “tainted” by the fact that my code “sucked”. You might have thought: of course he was going to be dismissive of numbers showing hapi was slow, because hapi’s numbers were terrible in comparison.
On my local machine, hapi v16 handled 3,500 requests/second when using the Fastify benchmark scripts. A recent build of v17 handled 17,800 requests/second. That’s a 508% improvement. Almost a tie with a bare Express benchmark at 18,500 requests/second.
Garbage.
Is being able to handle 5 times more requests per second not better? Well, no. Not when the payload we are delivering back is a tiny, useless JSON blob. If your production application is a hello world server, then sure, 500% is amazing. But then again, why use a framework at all?! Also, how do you still have a job?
Let’s look at these numbers using a unit that actually matters: framework delay. This is the average amount of time the framework itself takes to process a request: the overhead. In this case, v16 takes about 0.29ms and v17 about 0.056ms. A difference of 0.234ms. That’s not 234ms — it’s 0.234ms!
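The framework-delay arithmetic is just the reciprocal of throughput. A quick sketch of the conversion (the throughput numbers are the ones quoted above; the 0.234ms figure in the text comes from rounding v16 up to 0.29ms first):

```javascript
// Per-request framework delay in milliseconds, given benchmark
// throughput in requests/second: delay = 1000 / throughput.
const delayMs = (reqPerSec) => 1000 / reqPerSec;

const v16 = delayMs(3500);   // hapi v16 on my machine
const v17 = delayMs(17800);  // hapi v17 on the same machine

console.log(v16.toFixed(3)); // "0.286"
console.log(v17.toFixed(3)); // "0.056"

// Rounding v16 to 0.29ms, as in the text, yields the 0.234ms difference.
console.log((v16 - v17).toFixed(3)); // "0.230"
```

The same one-liner produces every delay figure in this post.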
In other words, with everything else being equal, hapi v17 has the potential to return responses 0.234ms faster. The potential, because V8 optimizations tend to make the same code perform very differently depending on what else is actually loaded into memory.
500% faster = 0.234ms faster = Meh.
If you need those extra 0.234ms, you should not be using any framework. For example, Fastify’s own benchmarks on much better hardware show 34,613 requests/second. That’s a 0.029ms delay — almost twice as fast as hapi v17, and yet only 0.027ms faster per request.
Want more numbers? Bare node v8.6.0 on the same hardware Fastify was tested on gets 39,952 requests/second. That’s a 0.025ms delay. Too slow? Maybe node is not the right solution for your very demanding application. And btw, that’s a super impressive accomplishment for Fastify, getting within 0.004ms of bare node!
If it is so stupid, why am I running benchmarks at all? Because I am running them against my own baseline, to ensure new versions perform as well as or better than previous ones. Benchmarking is a great tool for measuring the impact of new architectures and decisions. In hapi v17’s case, I had to ensure async/await would work as fast as callbacks (they do).
So, don’t expect the hapi v17 release notes to promote the fact that the new version is 500% faster. It might be a nice number to brag about, but it still means nothing.