Re: [jetty-dev] Trying to make sense of slow server response times

Thanks, Greg :)

> If I'm understanding your benchmark correctly, the measureSumArrayHttp test is measuring average latency of:

Yes, this is correct.

> So firstly, this is a really bad way to benchmark a server:

Ok, but how can I benchmark a server without using (some) client code? And assuming I can't, what kind of client code should I use instead? All I need to do is send a GET request and receive the response (containing a single number).
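
Just so we are talking about the same thing, by "client code" I mean something as minimal as the following sketch (using Jetty's own HttpClient, package names as in Jetty 9/11; the URL and port are placeholders for the benchmark servlet):

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.client.api.ContentResponse;

    public class MinimalGetClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = new HttpClient();
            client.start();
            try {
                // Placeholder URL; the real benchmark targets the summation servlet.
                ContentResponse response = client.GET("http://localhost:8080/sum");
                System.out.println(response.getContentAsString());
            } finally {
                client.stop();
            }
        }
    }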

> Given all that, you need to find out where the extra latency is coming from, with my two suspects being either the client itself or connection establishment time due to different reuse policies.

Yes, this is exactly my goal. However, I am pretty sure the client and connection establishment can only be part of the story, because such client/connection-establishment overhead would not depend on the size of the array, yet the overhead I see does: it increases proportionally to the array size, at least on some of the machines I tried. I also used different profilers and logged the calculation time on the server side to verify that the vast majority of the measured time is actually spent in the array summation code.

> Enable the jetty stats handler and connection statistics and that will tell you latencies within the servlet and how many requests per connection etc.

Thanks for these suggestions. I will try the stats handler and log the connection statistics. Even though I don't think this can explain the cases where the overhead is proportional to the array size, it might still help with understanding the ones with constant overhead.
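
If I read the documentation correctly, wiring this in would look roughly like the following sketch (class and getter names from the Jetty 9/11 API as far as I can tell, not tested yet); it has to run before server.start():

    import org.eclipse.jetty.io.ConnectionStatistics;
    import org.eclipse.jetty.server.Connector;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.StatisticsHandler;

    public class StatsSetup {
        // "server" is the already-configured Jetty Server from the benchmark setup.
        static void enableStatistics(Server server) {
            // Wrap the existing handler so request latencies inside the server are recorded.
            StatisticsHandler stats = new StatisticsHandler();
            stats.setHandler(server.getHandler());
            server.setHandler(stats);

            // Record per-connection statistics: bytes, messages and connection lifetimes,
            // i.e. how many requests each connection actually carries.
            ConnectionStatistics connStats = new ConnectionStatistics();
            for (Connector connector : server.getConnectors()) {
                connector.addBean(connStats);
            }

            // After the benchmark run, inspect e.g.:
            //   stats.getRequests(), stats.getRequestTimeMean(), stats.getRequestTimeMax()
            //   connStats.getConnectionsTotal(), connStats.getReceivedMessages()
        }
    }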


On 09.05.22 12:19, Greg Wilkins wrote:
If I'm understanding your benchmark correctly, the measureSumArrayHttp test
is measuring average latency of:

    - send an HTTP request via a client
    - jetty receives the HTTP request
    - the summation code runs in a servlet
    - jetty produces a response
    - the client receives the response

So firstly, this is a really bad way to benchmark a server:

    - you will include the time spent in the client
    - you don't know the client and/or server connection reuse policy, so
    your latencies may also include connection establishment times
    - if you are concerned with throughput, then the rate at which requests
    are offered is a function of latency, which is not realistic
    - if the client is using a single connection, then you are likely to be
    using only a single CPU core on the server, so throughput will look
    lower... but then it won't be contending with other server threads for
    shared resources, so latency may be better

Given all that, you need to find out where the extra latency is coming
from, with my two suspects being either the client itself or connection
establishment time due to different reuse policies.
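
For example, one way to take connection establishment out of the measurement
(just a sketch, assuming the benchmark keeps using Jetty's HttpClient; URL and
setup details are placeholders) is to pin the client to a single reused
connection and send one warm-up request before timing anything:

    // Fragment of the client setup, configured before the client is started.
    HttpClient client = new HttpClient();
    client.setMaxConnectionsPerDestination(1);  // a single connection, reused for every request
    client.start();

    client.GET("http://localhost:8080/sum");    // warm-up request: establishes the connection

    long start = System.nanoTime();             // only the requests after this point are timed
    ContentResponse response = client.GET("http://localhost:8080/sum");
    long elapsedNanos = System.nanoTime() - start;

If the extra latency disappears with that in place, connection establishment
was the culprit; if it doesn't, the client itself or the server side is where
to keep looking.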

Enable the Jetty stats handler and connection statistics and that will tell
you the latencies within the servlet and how many requests per connection
etc. That will help you narrow down where the issue is.

cheers






On Sun, 8 May 2022 at 10:03, <easbar.mail@xxxxxxxxxx> wrote:

Dear Jetty Devs and Community,

hopefully I've come to the right place to ask for help with the
following issue. I am puzzled by the server response times I am
measuring using a Jetty instance. I reduced the issue such that now
there are only two HttpServlets. One just returns a constant number and
serves as a baseline. The other calculates and returns the sum of all
integers in an array. I also measure the time the actual summation takes.
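
Stripped down to the essentials (the real code is in the repository linked
below; the array size and names here are just placeholders), the summation
servlet looks roughly like this:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;      // jakarta.servlet.* with Jetty 11
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SumArrayServlet extends HttpServlet {
        // Placeholder size; the benchmark varies the array length.
        private final int[] values = new int[1_000_000];

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            long start = System.nanoTime();
            long sum = 0;
            for (int v : values) {
                sum += v;
            }
            // This is the "actual summation" time I compare against the measured response time.
            long summationNanos = System.nanoTime() - start;
            System.out.println("summation took " + summationNanos + " ns");

            resp.getWriter().print(sum);
        }
    }

The baseline servlet is the same minus the loop: it just writes a constant number.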

On some machines everything seems fine: The response time of the servlet
that does the summation is roughly equal to the time the summation is
expected to take. Querying the server only adds a minimal overhead.

But on others there is significant overhead (the servlet response is
much slower than the actual summation), and on some the overhead is even
proportional to the size of the array!

You can find the benchmark, instructions on how to run it, and a more
detailed explanation of my results (in README.md) here:

https://github.com/easbar/jetty_jmh_benchmark/commit/f32ca12256021c69589a90f460097887ac802740

I am referencing this specific commit to avoid confusion in case there are
changes to the benchmark later.

I have tried using profilers and JVM flags like `-XX:+PrintCompilation`
to find out what causes this slowdown, but the server response times I
am measuring still remain a complete mystery to me. They are very
reproducible, i.e. whenever I run the benchmark I get pretty much the
same result, but (qualitatively) different results on different
machines. I also tried different JDK versions (8, 11, 17) and Jetty 9
and 11, but that does not seem to change anything.

Frankly speaking, I don't really know what else to try to get to the
bottom of this, which is why I came here.

Any kind of help would be highly appreciated, e.g.

* running the benchmark and posting your results
* looking at the results in README.md and giving some advice on what I
could do to debug this, or thinking about what *could* be the cause of
such behavior

Thanks in advance!




