
Re: [jetty-users] Flow control in AsyncMiddleManServlet

Hi,

On Tue, Feb 19, 2019 at 4:48 PM Raoul Duke <rduke496@xxxxxxxxx> wrote:
> What happens in the case where there are (say) 100 threads and 100 concurrent clients all sending furiously (at LAN speed) for very large uploads.
> It looks like the thread will spin in a "while" loop while there is still more data to read. So, if that is correct, couldn't all 100 threads be occupied with those
> long-lived and very fast uploads, such that concurrent client 101 is frozen out of getting its upload payload shunted?

You have 100 clients _trying_ to upload at network speed.
Let's assume we have exactly 100 threads available in the proxy to
handle them concurrently.
Each thread will read a chunk and pass it to the (possibly slow)
write towards the server. The write is non-blocking, so it may take a
long time to complete, but it won't block any thread.
If the write completes synchronously, the thread will finish the write
and go back to read another chunk, and so on.
If the write is asynchronous, the thread will return to the thread
pool and will be able to handle another client.
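
Roughly, the read side looks like the sketch below. This is not the
actual AsyncMiddleManServlet code, just the standard Servlet 3.1
non-blocking read pattern it builds on; forwarding each chunk to the
server is omitted from the sketch.

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.ReadListener;
    import javax.servlet.ServletInputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class UploadReadSketch extends HttpServlet
    {
        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException
        {
            AsyncContext asyncContext = request.startAsync();
            asyncContext.setTimeout(0);
            ServletInputStream input = request.getInputStream();
            input.setReadListener(new ReadListener()
            {
                @Override
                public void onDataAvailable() throws IOException
                {
                    byte[] chunk = new byte[4096];
                    // The thread spins only while a read would not block.
                    while (input.isReady())
                    {
                        int read = input.read(chunk);
                        if (read < 0)
                            return;
                        // Here the chunk would be passed to the (possibly
                        // asynchronous) write towards the server.
                    }
                    // isReady() returned false: the read went asynchronous,
                    // this thread goes back to the pool, and the container
                    // calls onDataAvailable() again when more data arrives.
                }

                @Override
                public void onAllDataRead()
                {
                    asyncContext.complete();
                }

                @Override
                public void onError(Throwable failure)
                {
                    asyncContext.complete();
                }
            });
        }
    }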

Chances are that in your setup the 100 clients will eventually be
slowed down by TCP congestion, and therefore won't be able to upload
at network speed.
This is because the proxy is not reading fast enough, since it has to
perform slow writes to the server.

The moment one I/O operation on the proxy goes asynchronous (i.e. it
reads 0 bytes or writes fewer bytes than expected), the thread will go
back to the thread pool and potentially be available for another client.
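
The write side follows the same pattern. As a rough sketch (again not
the actual proxy code: in AsyncMiddleManServlet the write goes to the
server via HttpClient, but the mechanics are analogous), assuming the
asyncContext from the read sketch above and a hypothetical nextChunk()
that returns buffered data, or null when there is nothing left to write:

    // Uses javax.servlet.ServletOutputStream and javax.servlet.WriteListener.
    ServletOutputStream output = response.getOutputStream();
    output.setWriteListener(new WriteListener()
    {
        @Override
        public void onWritePossible() throws IOException
        {
            // Write only while the write would not block.
            while (output.isReady())
            {
                byte[] chunk = nextChunk(); // hypothetical buffered data source
                if (chunk == null)
                {
                    asyncContext.complete();
                    return;
                }
                output.write(chunk);
            }
            // isReady() returned false: the write went asynchronous, the
            // thread goes back to the pool, and onWritePossible() is called
            // again when the write can proceed.
        }

        @Override
        public void onError(Throwable failure)
        {
            asyncContext.complete();
        }
    });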

In the perfect case, all 100 threads will be busy reading and writing
so the 101st client will be a job queued in the thread pool waiting
for a thread to be freed.
In the real case, I expect that some I/O operation or some scheduling
imbalance (I assume you don't have 100 hardware cores on the server)
will make one thread available to serve the 101st client before all
the previous 100 are finished.
E.g. client #13 finishes first so its thread will be able to serve
client #101 while the other 99 are still running.

For HTTP/1.1, your knob is the thread pool max size: the larger it is,
the more concurrent clients you should be able to handle.
The smaller it is, the more you are queueing on the proxy and
therefore pushing back on the clients (because you won't read them).
If it is too small, the queueing on the proxy may be so large that a
client may time out before the proxy has the chance to read from it.
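
If you embed Jetty, a minimal sketch of sizing that knob (the numbers
are placeholders, not recommendations):

    // org.eclipse.jetty.util.thread.QueuedThreadPool
    // org.eclipse.jetty.server.Server
    QueuedThreadPool threadPool = new QueuedThreadPool();
    threadPool.setMaxThreads(500);
    threadPool.setMinThreads(8);
    Server server = new Server(threadPool);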

Alternatively, you can configure the proxy with the QoSFilter, which
limits the number of requests that can be served concurrently.
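
For example, with embedded Jetty (org.eclipse.jetty.servlets.QoSFilter,
registered via org.eclipse.jetty.servlet classes; the value is just a
placeholder):

    // Also uses java.util.EnumSet and javax.servlet.DispatcherType.
    ServletContextHandler context = new ServletContextHandler();
    FilterHolder qos = new FilterHolder(QoSFilter.class);
    qos.setInitParameter("maxRequests", "50");
    context.addFilter(qos, "/*", EnumSet.of(DispatcherType.REQUEST));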
Or you can use AcceptRateLimit or ConnectionLimit to throttle things
at the TCP level.
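
Again just a sketch with placeholder numbers
(org.eclipse.jetty.server.ConnectionLimit and
org.eclipse.jetty.server.AcceptRateLimit; double-check the constructor
signatures for your Jetty version), reusing the server from the thread
pool sketch above:

    // Also uses java.util.concurrent.TimeUnit.
    server.addBean(new ConnectionLimit(1000, server));
    server.addBean(new AcceptRateLimit(100, 1, TimeUnit.SECONDS, server));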

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

