Re: [jetty-dev] Clarification on Request Timeouts

Hi,

On Thu, Jun 1, 2017 at 12:15 AM, Neha Munjal <neha.munjal3@xxxxxxxxx> wrote:
> Thanks Simone.
>
> I agree that a load test is not the correct scenario here. But the point I
> wanted to highlight was that we would have a large number of requests being
> processed asynchronously by a client object, which also implies that some
> requests will be queued up, depending upon the bandwidth, latency and
> processing capacity. Additionally, we can have intermittent synchronous
> HTTP/2 requests, with a timeout configured, to be processed by the same
> client object. When the server is busy processing requests and
> we have some async requests already queued up, the sync requests would also
> queue up, and there is a possibility of those timing out while they are
> queued.

Yes. You are overloading the client.

Typically you want to propagate the backpressure that the server is
applying to HttpClient back to the application that sends the
requests.
This is easily done with a Request.QueuedListener and a
Response.CompleteListener.
In the former you increase a counter, in the latter you decrease it,
and you only allow the application to send a request while the
counter is within a certain range.
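A minimal, self-contained sketch of that counter pattern, using only
java.util.concurrent (a small thread pool stands in for the slow server;
the bound of 4 and the sleep are arbitrary choices for illustration).
With HttpClient, the increment would go in
Request.QueuedListener.onQueued() and the decrement in
Response.CompleteListener.onComplete():

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Backpressure {
    static final int MAX_IN_FLIGHT = 4;              // hypothetical bound
    static final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        // A slow receiver: only 2 threads to serve 20 requests.
        ExecutorService server = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 20; i++) {
            // The sender blocks here when the counter is out of range:
            // this is the backpressure on the application.
            permits.acquire();
            int now = inFlight.incrementAndGet();    // "QueuedListener"
            peak.accumulateAndGet(now, Math::max);
            server.submit(() -> {
                try {
                    Thread.sleep(10);                // simulated processing
                } catch (InterruptedException ignored) {
                }
                inFlight.decrementAndGet();          // "CompleteListener"
                permits.release();
            });
        }
        server.shutdown();
        server.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("peak in-flight = " + peak.get());
    }
}
```

The peak number of in-flight requests never exceeds the bound, so nothing
sits in the queue long enough to hit its timeout.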

> It looks like that we should have separate HttpClient objects for sync and
> async modes, to avoid requests timing out,

I don't see how this would solve your issue.

You have a fast sender and a slow receiver. It does not matter whether you
use different HttpClient objects; you still have a fast sender and a
slow receiver.
You need to apply backpressure to the application if you don't want
your requests to time out.

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

