Re: [jetty-users] Interleaving requests for fairness/starvation?
Thank you very much for your suggestions, they're really useful.
Yes - I was thinking that we do need to limit the total number of threads, as each of them can use a lot of memory. I guess we could increase the thread pool size to something large and put a fair semaphore around the actual work. After each 50ms chunk of work is done, a startAsync followed by an immediate redispatch would, I think, preserve the behaviour of ThreadLimitHandler - in particular, that requests from a single client remain interleaved, thanks to the FIFO queue per Remote. (Correct me if I'm wrong!)
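Roughly what I have in mind, as a standalone sketch (no servlet API here - the permit count, chunk count, and names are made up; in the real thing the acquire/release boundary would sit around each 50ms chunk between redispatches):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class FairChunkDemo {
    // Fair semaphore: at most 2 requests do "work" at once; waiters are served FIFO.
    static final Semaphore PERMITS = new Semaphore(2, true);

    static void handleRequest(String id, int chunks, ConcurrentLinkedQueue<String> log)
            throws InterruptedException {
        for (int i = 0; i < chunks; i++) {
            PERMITS.acquire();           // block (FIFO) until a permit is free
            try {
                log.add(id + "#" + i);   // stand-in for ~50ms of real work
            } finally {
                PERMITS.release();       // yield the permit so queued requests get a turn
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ConcurrentLinkedQueue<String> log = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String id : new String[]{"a", "b", "c", "d"}) {
            pool.submit(() -> { handleRequest(id, 3, log); return null; });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(log.size());  // 4 requests x 3 chunks
    }
}
```

The fairness of the semaphore is what gives the FIFO turn-taking; an unfair Semaphore could let one thread re-acquire immediately and starve the others.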
I should also have mentioned that writing the response (and reading from disk) makes up much of the work of each 50ms chunk. I guess it's something like a file server. So the clients (which are generally not browsers) aren't waiting minutes for the HTTP headers, and we need to stream the response (as they can be very large).
With the concurrent queue idea, I guess after each chunk of work, you could always add yourself to the back of the queue and dispatch a request from the front (which could be yourself - in that case it doesn't matter that you're hogging a thread). You'd also need to reserve one thread to accept new requests, which are immediately put on the queue (so no blocking work is done). (I think you'd also need a reserved thread in your suggestion?)
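To check my own understanding of the interleaving, here's a single-threaded toy simulation of that queue discipline (the requests and chunk counts are invented; a real version would hold AsyncContexts and dispatch them):

```java
import java.util.ArrayDeque;

public class InterleaveDemo {
    public static void main(String[] args) {
        // FIFO queue of resumable requests: {requestId, chunksLeft}
        // (stand-ins for queued AsyncContexts awaiting dispatch).
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{1, 3});
        queue.add(new int[]{2, 3});
        StringBuilder order = new StringBuilder();
        // Single worker: run one chunk, then requeue at the back if unfinished.
        while (!queue.isEmpty()) {
            int[] req = queue.poll();
            order.append(req[0]);             // do one ~50ms chunk of work
            if (--req[1] > 0) queue.add(req); // "yield": back of the queue
        }
        System.out.println(order);            // the two requests alternate
    }
}
```

Note the case mentioned above falls out naturally: if the queue is otherwise empty, the worker just polls itself straight back off the front and keeps the thread.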
Agreed that (assuming you agree with it) the non-async approach is probably better. (It's also clearly better than a separate thread pool!)
From: jetty-users-bounces@xxxxxxxxxxx <jetty-users-bounces@xxxxxxxxxxx> on behalf of Greg Wilkins <gregw@xxxxxxxxxxx>
Sent: 01 June 2020 13:31
To: JETTY user mailing list <jetty-users@xxxxxxxxxxx>
Subject: Re: [jetty-users] Interleaving requests for fairness/starvation?
Are you doing this to try to limit the total threads, or just to make sure that some threads are not starving others? If you don't mind the total number of threads, then you could simply put in a Thread.yield() call every 50ms and not worry about async servlet handling.
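Something like this, as a sketch (the chunk loop and timings are illustrative; note Thread.yield is only a hint to the scheduler, not a guarantee):

```java
public class YieldDemo {
    // Periodically hint the scheduler that other ready threads may run.
    static void processWithYield(int chunks) {
        long lastYield = System.nanoTime();
        for (int i = 0; i < chunks; i++) {
            // ... one small unit of real work here ...
            if (System.nanoTime() - lastYield >= 50_000_000L) { // ~50ms elapsed
                Thread.yield();              // give other request threads a turn
                lastYield = System.nanoTime();
            }
        }
    }

    public static void main(String[] args) {
        processWithYield(1000);
        System.out.println("done");
    }
}
```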
Note also, from a user's perspective, having requests that can take many minutes to respond is not great. If your clients are browsers, then it could be better to separate your handling from the request/response thread - i.e. start the handling and then immediately respond. Your clients could then poll (or better yet long poll) to see progress and to get the final state.
I'm very cautious about any approach that changes the behaviour of the threadpool, especially tryExecute. The behaviour there is all very finely tuned to our scheduling strategies to avoid deadlocks and thread starvation.
If you do want to use servlet async to do the interleaving, you need to get other requests/threads to do the dispatching. E.g. every 50ms of processing, your app can look at a concurrent list to see if other processes are waiting to be handled; if they are, dispatch the first of them, then start async and add your AsyncContext to the end of the queue. The problem with this approach is: how does the first request ever get added to the queue? Perhaps you need a limit on concurrent handling as well, and if you are above that, add yourself to the queue... If you finish handling and reduce the handling count below the threshold, then you can also wake up any waiting contexts on the queue. This is making Thread.yield feel very simple.
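The admission/wake-up logic might look something like this single-threaded sketch (purely illustrative - the limit, the String "requests", and the log are made up, and a real version would park AsyncContexts and call dispatch(), with care taken over the race between parking and waking):

```java
import java.util.ArrayDeque;

public class ThresholdDemo {
    public static void main(String[] args) {
        int maxActive = 1;                               // concurrent-handling limit
        int active = 0;
        ArrayDeque<String> waiting = new ArrayDeque<>(); // parked requests (AsyncContexts)
        StringBuilder log = new StringBuilder();

        for (String req : new String[]{"a", "b", "c"}) {
            if (active < maxActive) {
                active++;                                // under the limit: handle now
                log.append("run:").append(req).append(' ');
            } else {
                log.append("park:").append(req).append(' ');
                waiting.add(req);                        // over the limit: startAsync and queue
            }
        }
        // First handler finishes: drop below the threshold, wake one parked request.
        active--;
        String next = waiting.poll();
        if (next != null) log.append("wake:").append(next);
        System.out.println(log);
    }
}
```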
On Mon, 1 Jun 2020 at 13:51, Matthew Boughen <knabbers@xxxxxxxxxx> wrote: