
Re: [jetty-dev] Trying to make sense of slow server response times

Thanks for your explanations, this makes sense. And sure, it all depends on what one wants to measure. My intention here wasn't really to test the server; I was just wondering why my worker method runs slower in the context of the Jetty server, as the benchmark I provided shows.

On 09.05.22 14:17, Greg Wilkins wrote:
On Mon, 9 May 2022 at 12:46, <easbar.mail@xxxxxxxxxx> wrote:

  > So firstly, this is a really bad way to benchmark a server:

Ok, but how can I benchmark a server without using (some) client code?
And assuming I can't, what kind of client code should I use instead? All
I need to do is send a GET request and receive the response (containing
a single number).
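
For a GET like that, the JDK's built-in `java.net.http.HttpClient` (Java 11+) is a reasonable minimal client. A sketch, using the JDK's own `com.sun.net.httpserver.HttpServer` as a stand-in for the real Jetty server (the `/work` path and the response value are assumptions for illustration):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SingleGet {
    public static void main(String[] args) throws Exception {
        // Stand-in server: returns a single number, like the worker method described.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/work", exchange -> {
            byte[] body = "42".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Client side: one GET, read the body as a string.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + server.getAddress().getPort() + "/work"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // prints 42

        server.stop(0);
    }
}
```
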

The answer is that it depends on what you really want to measure.

Too many benchmarks end up measuring throughput over a few very
busy connections, when users don't care about throughput and most
web servers have many mostly idle connections. I have seen benchmark
reports claiming a server can do 1,000,000 requests per second... but
the test was measured over 30 seconds and the average latency of all the
requests was 30s! So the deployers would be happy saying they were handling a
million requests a second, but the users would all be unhappy because every
request took 30s.
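
The mismatch in that anecdote follows from Little's law: requests in flight = throughput × average time in system. A quick check with the numbers above (the figures are from the anecdote, not a real measurement):

```java
public class LittlesLaw {
    public static void main(String[] args) {
        double throughputPerSec = 1_000_000; // reported requests per second
        double avgLatencySec = 30;           // average latency seen by users

        // Little's law: L = lambda * W
        // (requests in flight = arrival rate * time each spends in the system)
        double inFlight = throughputPerSec * avgLatencySec;

        System.out.println((long) inFlight); // prints 30000000
    }
}
```

Thirty million requests queued inside the test at any moment is a sign the load generator was saturating the server, not modelling users.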

So more often than not, what you really want to measure is quality of
service under load.

To do that, you need a client setup that can offer a fixed, known request
rate with realistic connection behaviour (e.g. each connection might do a
burst of 10 requests, then be idle for many seconds, then do another burst
of 10 requests, with all 20 requests accessing the same set of application
resources).

You then measure average latency, maximum latency, and error rate.  If they
fall within your acceptable bounds, then you know your server can handle
that load.  You then increase the offered load and measure again.
Lather, rinse, repeat!
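
One way to keep the offered rate fixed is an "open" load model: compute each request's scheduled send time up front and measure latency from the schedule, not from when the client actually managed to send, so a stalled server inflates the reported latency instead of silently lowering the offered rate. A sketch, with an in-process sleep standing in for the real request (the rates and the 2 ms "server time" are arbitrary assumptions):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FixedRateLoad {
    // Stand-in for one request/response round trip;
    // a real test would do an HTTP GET here instead.
    static void doRequest() throws InterruptedException {
        Thread.sleep(2); // pretend the server takes ~2 ms
    }

    public static void main(String[] args) throws Exception {
        int requests = 50;
        long intervalNanos = 10_000_000L; // offered rate: one request every 10 ms
        long start = System.nanoTime();
        List<Long> latenciesMs = new ArrayList<>();

        for (int i = 0; i < requests; i++) {
            long scheduled = start + i * intervalNanos;
            long wait = scheduled - System.nanoTime();
            if (wait > 0) {
                Thread.sleep(wait / 1_000_000, (int) (wait % 1_000_000));
            }
            doRequest();
            // Latency is measured from the *scheduled* send time, so delays
            // queue up in the numbers rather than being quietly absorbed.
            latenciesMs.add((System.nanoTime() - scheduled) / 1_000_000);
        }

        long max = Collections.max(latenciesMs);
        double avg = latenciesMs.stream().mapToLong(Long::longValue).average().orElse(0);
        System.out.printf("avg=%.1fms max=%dms%n", avg, max);
    }
}
```

A real setup would run many such schedules concurrently, one per simulated connection, with idle gaps between bursts as described above.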

JMH is seldom going to be suitable for such macro benchmarks.  It is
awesome for micro benchmarks, but I don't think it handles external
resources well when they must be shared between iterations and contended on.

