Re: [jetty-dev] Clarification on Request Timeouts

Hi Simone,

I agree. The way we control this is via a Semaphore on the application (client) side, with permits equal to the maxRequestsQueuedPerDestination setting.
The send(..) API is invoked only when this Semaphore has a permit available; any requests exceeding that limit block on the application side.
For async requests we also use a Response.CompleteListener, which releases the acquired permit once the request/response conversation has completed.

For sync requests we do not have a response listener, though we still use the Semaphore to block requests exceeding the limit.
But the moment the Semaphore admits a request and we call the send() API, the request is queued. If the server is busy processing other requests, a queued request that has a timeout imposed may time out before it is ever sent.
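For reference, the gating described above can be sketched in plain Java. The class and method names (SendGate, maxQueued) are mine, not Jetty API, and the actual HttpClient send is stubbed with a Runnable; in real code the permit would be released from the Response.CompleteListener:

```java
import java.util.concurrent.Semaphore;

// Sketch only: permits mirror the maxRequestsQueuedPerDestination setting,
// so at most that many requests are in flight or queued at once.
class SendGate {
    private final Semaphore permits;

    SendGate(int maxQueued) {
        this.permits = new Semaphore(maxQueued);
    }

    // Blocks the application thread until a permit is free, then sends.
    void send(Runnable doSend) {
        permits.acquireUninterruptibly();
        try {
            // In real code: request.send(result -> onComplete());
            doSend.run();
        } catch (RuntimeException x) {
            permits.release(); // send failed, the CompleteListener never fires
            throw x;
        }
    }

    // Invoked from Response.CompleteListener.onComplete(Result).
    void onComplete() {
        permits.release();
    }

    int available() {
        return permits.availablePermits();
    }
}
```

Note that this bounds only how many requests are outstanding; it does not stop a request from timing out once it is queued inside HttpClient.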

Our use case sends bulk requests asynchronously, and I see no issue there since those requests have no timeout associated with them. However, intermittent synchronous requests with a timeout imposed may be sent through the same client, and those may time out if the server is slow to process the backlog.

Thanks
Neha

On Thu, Jun 1, 2017 at 12:04 AM, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
Hi,

On Thu, Jun 1, 2017 at 12:15 AM, Neha Munjal <neha.munjal3@xxxxxxxxx> wrote:
> Thanks Simone.
>
> I agree that a load test is not the correct scenario here. But the point I
> wanted to highlight was that we would have a bulk number of requests being
> processed asynchronously by a client object, which also implies that some
> requests will be queued up, depending upon the bandwidth, latency and
> processing capacity. Additionally, we can have intermittent synchronous
> HTTP/2 requests, which have a timeout configured, processed by the same
> client object. In scenarios where the server is busy processing requests and
> we have some async requests already queued up, the sync requests would also
> queue up, and there is a possibility of those timing out while they are
> queued.

Yes. You are overloading the client.

Typically you want to apply the backpressure that the server is
applying to HttpClient back to the application that sends the
requests.
This is easily done with a Request.QueuedListener and a
Response.CompleteListener.
In the first you increment a counter, in the second you decrement it,
and you only allow the application to send a request while the
counter is within a certain range.
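A minimal sketch of that counting scheme, using plain Java in place of the Jetty listener interfaces (the class and method names here are assumed, not Jetty API):

```java
// Sketch: count requests up when queued, down when complete, and block
// new sends while the count is at the limit.
class QueueBackpressure {
    private final int limit;
    private int inFlight; // guarded by this

    QueueBackpressure(int limit) {
        this.limit = limit;
    }

    // Call before request.send(...); blocks while the queue is full.
    synchronized void awaitCapacity() {
        boolean interrupted = false;
        while (inFlight >= limit) {
            try {
                wait();
            } catch (InterruptedException x) {
                interrupted = true;
            }
        }
        if (interrupted) {
            Thread.currentThread().interrupt(); // restore interrupt status
        }
    }

    // Wire as: request.onRequestQueued(r -> onQueued())
    synchronized void onQueued() {
        inFlight++;
    }

    // Wire as the Response.CompleteListener: result -> onComplete()
    synchronized void onComplete() {
        inFlight--;
        notifyAll();
    }

    synchronized int inFlight() {
        return inFlight;
    }
}
```

Unlike a queue inside HttpClient, this pushes the server's backpressure all the way back to the code producing the requests, so a slow server throttles the sender instead of letting queued requests time out.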

> It looks like that we should have separate HttpClient objects for sync and
> async modes, to avoid requests timing out,

I don't see how this would solve your issue.

You have a fast sender and a slow receiver. It does not matter if you
use different HttpClient objects, you still have a fast sender and a
slow receiver.
You need to apply backpressure to the application if you don't want
your requests to time out.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.
_______________________________________________
jetty-dev mailing list
jetty-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-dev

