Hi Simon,

> Hi,
>
> On Tue, Feb 19, 2019 at 2:11 AM Raoul Duke <rduke496@xxxxxxxxx> wrote:
> > To try to zone in on the above, I'm just trying to understand the
> > basic flow-control workflow. Let's say I had 1000 clients all sending
> > large HTTP PUTs to the proxy concurrently, with large files (10 MB, say).
> >
> > My assumptions are as follows (please correct any that are wrong, as
> > that would be a big help):
> > * A single socket read of $chunk_size will be performed on each client
> >   connection, up to the max read size (which seems to be 4K by default,
> >   but correct me if I'm wrong).
>
> Correct.
>
> Note that if your clients send concurrent requests, Jetty at the proxy
> will allocate threads to serve those requests concurrently, where the
> number of threads depends on the thread pool configuration.
> So if you have the default thread pool of 200 threads and you have 100
> concurrent clients, Jetty will allocate roughly 100 threads to serve
> the concurrent requests.
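
Just to make sure I'm configuring the right thing: I take it the pool is
sized where the Server is built, along these lines? A minimal embedded
sketch of my understanding (the 200/8 sizes are the defaults as I
understand them, and ProxyMain is just a made-up name):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    public class ProxyMain
    {
        public static void main(String[] args) throws Exception
        {
            // 200 max / 8 min threads; raise maxThreads if more
            // concurrently active requests are expected than that.
            QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
            Server server = new Server(threadPool);

            ServerConnector connector = new ServerConnector(server);
            connector.setPort(8080);
            server.addConnector(connector);

            // ... deploy AsyncMiddleManServlet in a context here ...

            server.start();
        }
    }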
> Note also that AsyncMiddleManServlet is completely asynchronous, so if
> those requests upload large files, Jetty will read the request content
> asynchronously.
> This means that if the clients are "slow" to send content, Jetty will
> return idle threads to the thread pool and only use threads when there
> is content available.
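
To check my mental model of "asynchronous" here: I take it this is the
standard Servlet 3.1 async I/O pattern underneath, roughly like the
sketch below (my own sketch, not the actual AsyncMiddleManServlet code;
AsyncUploadServlet, the URL pattern, and the 4096 buffer size are made up):

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.ReadListener;
    import javax.servlet.ServletInputStream;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // asyncSupported=true is required for startAsync() to work.
    @WebServlet(urlPatterns = "/upload", asyncSupported = true)
    public class AsyncUploadServlet extends HttpServlet
    {
        @Override
        protected void doPut(HttpServletRequest request, HttpServletResponse response) throws IOException
        {
            AsyncContext async = request.startAsync();
            ServletInputStream input = request.getInputStream();
            input.setReadListener(new ReadListener()
            {
                private final byte[] buffer = new byte[4096];

                @Override
                public void onDataAvailable() throws IOException
                {
                    // Read only while content is already available. When
                    // isReady() returns false, this method returns, the
                    // thread goes back to the pool, and the container
                    // calls onDataAvailable() again when more arrives.
                    while (input.isReady() && input.read(buffer) >= 0)
                    {
                        // process the chunk here
                    }
                }

                @Override
                public void onAllDataRead() throws IOException
                {
                    response.setStatus(HttpServletResponse.SC_OK);
                    async.complete();
                }

                @Override
                public void onError(Throwable failure)
                {
                    async.complete();
                }
            });
        }
    }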

What happens in the case where there are (say) 100 threads and 100
concurrent clients, all sending furiously (at LAN speed) on very large
uploads? It looks like the thread will spin in a "while" loop while
there is still more data to read. If that is correct, couldn't all 100
threads be occupied by those long-lived and very fast uploads, such that
concurrent client 101 is frozen out of getting its upload payload
shunted? It wouldn't be my expectation that it would work as above, but
I see some evidence that it does. Are there configuration parameters I
should be checking here that would influence this? (The contrast I have
in mind is sketched below.)
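
Concretely, the failure mode I'm imagining is the classic blocking-style
read loop (a hypothetical sketch, not Jetty code; BlockingDrain is a
made-up name):

    import java.io.IOException;
    import java.io.InputStream;

    public class BlockingDrain
    {
        // Blocking style: the thread parks inside read() between chunks,
        // so one long, fast upload pins one pool thread for its lifetime.
        static void drain(InputStream in) throws IOException
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) >= 0)
            {
                // process buffer[0..read) here
            }
        }
    }

whereas, if I read the async pattern correctly, the loop exits as soon
as isReady() returns false and the thread is handed back to the pool. So
my question boils down to: does a LAN-speed client ever make isReady()
return false, or does that thread stay busy until the upload completes?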

> Hope this helped.

It does help. Thanks so much for taking the time to respond.

RD.