Re: [jetty-users] HTTP/2 multiplexing
Hi,
On Mon, Jul 27, 2015 at 12:19 AM, Sam Leitch <sam@xxxxxxxxxx> wrote:
> https://github.com/oneam/test-http2-jetty
>
> I ran a test using 2 m4.xlarge EC2 instances. The ping time between them
> was sub-millisecond (~0.100 ms).
>
> Using HTTP/1.1 I was able to get ~40000 requests/s with a sub-millisecond
> median latency. Anything higher would cause a latency spike.
> Using HTTP/2 I was able to get ~50000 requests/s with a sub-millisecond median
> latency. Again, anything higher would cause a latency spike.
>
> That's an improvement, but not as significant as I have seen for similar
> protocol changes.
>
> I've done similar tests (ie. going from single request/response per TCP
> socket with multiple connections to multiple out-of-order concurrent
> request/response on a single TCP socket) and witnessed a 5-10x improvement
> in throughput. I was hoping to see something similar with HTTP/1.1 ->
> HTTP/2.
>
> (I know. Lies, damn lies, and benchmarks)
Exactly :)
By default HttpClient has a parallelism of 64 when running in HTTP/1.1 mode.
In your benchmark, very likely you are opening more than 1 connection
(up to 64 if needed) to the server when using HTTP/1.1.
With HTTP/2, you only open one.
Use HttpClient.setMaxConnectionsPerDestination(1) if you want to
compare single-connection performance.
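A minimal sketch of that change, assuming Jetty 9.3.x and an arbitrary test URL (the URL and request details are placeholders, not from your benchmark):

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class SingleConnectionBenchmark
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        // Cap the connection pool at 1 per destination, so the HTTP/1.1
        // numbers reflect a single TCP socket, like the HTTP/2 run does.
        // The default is 64.
        httpClient.setMaxConnectionsPerDestination(1);
        httpClient.start();
        try
        {
            // Placeholder target; substitute your benchmark endpoint.
            ContentResponse response = httpClient.GET("http://localhost:8080/");
            System.out.println(response.getStatus());
        }
        finally
        {
            httpClient.stop();
        }
    }
}
```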
How does it fare with this change on your AWS setup?
--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.