Re: [jetty-users] HTTP/2 multiplexing

It makes sense that changing the number of connections will make a huge difference, since HTTP/1.1 allows only one concurrent request per connection. My hope was that even with HttpClient.setMaxConnectionsPerDestination(200), I would still see an improvement on the HTTP/2 side.

The idea is that there is a limit on the rate of IP packets sent between machines. Therefore, even when using multiple connections there is a limit to the amount of traffic that can pass between two machines. (Note: number of packets, not total bandwidth)

HTTP/2 uses a single TCP connection for multiple outstanding requests. Since TCP is a byte-pipe, it can allow multiple requests to be transmitted in a single IP packet. This would allow the request rate to increase while the packet rate remains the same, thus increasing throughput.
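The packet-budget argument above can be sketched as back-of-envelope arithmetic. The MTU, header overhead, and request size below are assumed illustrative figures, not measured values; only the 50k-75k packets/s range comes from the tests described here.

```java
public class PacketBudget {
    public static void main(String[] args) {
        // Assumed figures for illustration, not measurements:
        int mtu = 1500;            // typical Ethernet MTU, in bytes
        int tcpIpOverhead = 40;    // IPv4 + TCP headers, no options
        int requestSize = 200;     // hypothetical small serialized request
        long packetRate = 60_000;  // packets/s, mid-range of the 50k-75k observed

        int payload = mtu - tcpIpOverhead;             // bytes available per packet
        int requestsPerPacket = payload / requestSize; // how many requests fit

        // One request per packet vs. several requests batched per packet
        // (the multiplexing case): same packet rate, higher request rate.
        long unbatchedRate = packetRate;
        long batchedRate = packetRate * requestsPerPacket;

        System.out.println(requestsPerPacket); // 7
        System.out.println(unbatchedRate);     // 60000
        System.out.println(batchedRate);       // 420000
    }
}
```

Under these assumptions, batching multiplies throughput by the number of requests that fit in a packet while the packet rate stays constant, which is the effect hoped for from HTTP/2 multiplexing.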

The request rates I have seen for HTTP/1.1 match the packet rates I've measured in most tests on Linux/MacOS systems (50k-75k packets/s), i.e. roughly one request per packet. Using a single TCP socket with a multiple-requests-per-packet optimization, I've seen much higher throughput (200k-1000k requests/s). Unfortunately, that testing used a primitive protocol, and I haven't spent the time to see whether I can repeat the results using HTTP.

Thanks for the advice. I'll keep investigating and see if the idea has any merit.

On Mon, Jul 27, 2015 at 5:54 AM, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
Hi,

On Mon, Jul 27, 2015 at 12:19 AM, Sam Leitch <sam@xxxxxxxxxx> wrote:
> https://github.com/oneam/test-http2-jetty
>
> I ran a test using two m4.xlarge EC2 instances. The ping time between them
> was sub-millisecond (~0.100ms).
>
> Using HTTP/1.1 I was able to get ~40000 requests/s with a sub-millisecond
> median latency. Anything higher would cause a latency spike.
> Using HTTP/2 I was able to get ~50000 requests/s with a sub-millisecond
> median latency. Again, anything higher would cause a latency spike.
>
> That's an improvement, but not as significant as I have seen for similar
> protocol changes.
>
> I've done similar tests (ie. going from single request/response per TCP
> socket with multiple connections to multiple out-of-order concurrent
> request/response on a single TCP socket) and witnessed a 5-10x improvement
> in throughput. I was hoping to see something similar with HTTP/1.1 ->
> HTTP/2.
>
> (I know. Lies, damn lies, and benchmarks)

Exactly :)

By default HttpClient has a parallelism of 64 when running in HTTP/1.1 mode.
In your benchmark, very likely you are opening more than 1 connection
(up to 64 if needed) to the server when using HTTP/1.1.
With HTTP/2, you only open one.
Use HttpClient.setMaxConnectionsPerDestination(1) if you want to
compare single connection performance.
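A minimal sketch of the suggested comparison, assuming the Jetty 9.3-era client API (HttpClient, HTTP2Client, HttpClientTransportOverHTTP2); adjust to the Jetty version actually in use. No test harness is included since the snippet is a configuration fragment that needs a running server.

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class SingleConnectionComparison {
    public static void main(String[] args) throws Exception {
        // HTTP/1.1 client pinned to one connection per destination,
        // so both protocols are compared over a single TCP connection.
        HttpClient http1Client = new HttpClient();
        http1Client.setMaxConnectionsPerDestination(1);
        http1Client.start();

        // HTTP/2 client: one multiplexed connection per destination
        // is already the default behavior.
        HttpClient http2Client = new HttpClient(
                new HttpClientTransportOverHTTP2(new HTTP2Client()), null);
        http2Client.start();

        // ... drive the same request load against each client ...

        http1Client.stop();
        http2Client.stop();
    }
}
```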

How does it fare with this change on your AWS setup?

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

