
[jetty-users] HttpClient and timed out exchanges

G'day Jetty Users.

I've been a long-time Jetty server user, but have only recently worked on a project that required the Jetty client component. I have some questions about how it behaves and how I should be using it in exceptional circumstances. I hope you can help. :-)

I am using jetty-client-7.4.2.v20110526.jar (and its dependencies).  Here's my story ...

I need to poll an HTTPS endpoint over which I have no control.  The basic idea is that my software polls the endpoint, waiting for data; when a message is available it is returned and I process it.  I then poll again to get the next message.  I do this all day, every day.

If data is not immediately available, the server holds the connection open for a while.  This is designed so that, as soon as data is available, it can be delivered immediately -- effectively making this a "push service" over HTTP.  My client code is meant to wait 30 seconds before timing out in these cases.

The service that I poll, however, does not produce data 24/7.  As such, there can be (variable) periods of several hours where I am sending HTTPS requests, connecting, waiting, timing out and trying again.  It's here that I seem to come unstuck.

This is how I configured the client:

    this.client = new HttpClient();
    this.client.setConnectorType(HttpClient.CONNECTOR_SOCKET);
    this.client.setTimeout(30 * Chrono.ONE_SECOND);                          // 30 second timeout
    this.client.setMaxConnectionsPerAddress(1);

And here is my polling code:

    while (true)                                                               // keep retrying until we get a response ...
    {
      final PushFeedExchange exchange = new PushFeedExchange(sessionId,        // prepare the exchange
                                                             date, 
                                                             lastSeq);
      
      try
      { 
        this.client.send(exchange);                                            // asynchronously send the request ...
        exchange.waitForPushDone();                                            // ... then immediately block waiting for a response
       
        return exchange.getResponseContent();                                  // return the push feed XML response
      }
      catch (PushFeedTimeoutException e)                                       // if we don't get a response in time ...
      {
        ;                                                                      // ... we'll immediately try again
      }
    }

PushFeedExchange is a subclass of ContentExchange.  It caches the response headers (i.e. it passes true to the ContentExchange constructor).  The method waitForPushDone() internally calls waitForDone().

  public void waitForPushDone() throws PushFeedTimeoutException, PushFeedAccessException, IOException
  {
    final int exchangeState;
    
    try
    {
      exchangeState = this.waitForDone();
      
      if (exchangeState == HttpExchange.STATUS_COMPLETED)
      {
        return;
      }
      else if (exchangeState == HttpExchange.STATUS_EXPIRED)
      {
        throw new PushFeedTimeoutException();
      }
      else if (exchangeState == HttpExchange.STATUS_EXCEPTED)
      {
        ...
      }
      else
      {
        ...
      }
    }
    catch (InterruptedException e)
    {
      ...
    }
  }
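
For completeness, the exchange itself is set up roughly like this (the URL and parameter names below are simplified for the list; they're not the real endpoint):

  public class PushFeedExchange extends ContentExchange
  {
    public PushFeedExchange(final String sessionId, final String date, final long lastSeq)
    {
      super(true);                                                             // cache the response headers

      this.setMethod("GET");
      this.setURL("https://push.example.com/feed"                             // illustrative URL -- the real one
                  + "?session=" + sessionId                                    //   carries the session, date and
                  + "&date=" + date                                            //   last sequence number
                  + "&lastSeq=" + lastSeq);
    }
  }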


What I have discovered, however, is that even though the exchange times out, the HttpClient/exchange does not forcibly close the TCP connection.  My application gets a timeout reported, but the connection remains.

I did some experimentation by connecting to a servlet that sleeps for 40 seconds (note my timeout is set to 30 seconds).  This is what I see:

Time  0 [client]: sends request
Time  0 [server]: gets request
Time 30 [client]: reports timeout
Time 30 [client]: reports sending next request
Time 40 [server]: closes first connection
Time 40 [server]: gets next request

Because I use 1 connection per address, it would appear that my subsequent exchange is queued until the first connection is closed by the server.

How can I get the HTTP client to close the connection when the timeout occurs?  Is there any other way I can stop the server from eating my connection pool?  (Raising the number of connections in the pool would not change the fact that if the server doesn't close the connection, I don't get it back.)
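
One thing I have considered (completely untested, and I'm not sure it's the intended hook) is overriding onExpire() in my exchange and cancelling it from there, in the hope that the client then drops the idle connection rather than keeping it reserved:

  public class PushFeedExchange extends ContentExchange
  {
    ...

    @Override
    protected void onExpire()
    {
      super.onExpire();                                                        // let the normal expiry handling run
      this.cancel();                                                           // then try to abort the exchange --
    }                                                                          //   but does this close the socket?
  }

I don't know whether cancel() actually closes the underlying TCP connection or merely marks the exchange as cancelled, which is really the heart of my question.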

Or, a broader question: what _should_ I be doing?

Kindest regards,

--
Greg

