Re: [jersey-dev] [External] : client lifecycle question

I'm sorry - correction: The workaround is to call HttpsURLConnection.getDefaultSSLSocketFactory().

On Wed, May 25, 2022 at 8:56 PM John Calcote <john.calcote@xxxxxxxxx> wrote:
So, I discovered the answer, for anyone else who may be doing what I'm doing (a heavily multithreaded async Jersey client). The problem is called out in this JDK bug:


It was discovered and logged in 2016 and found to exist then on JDK version 1.8.0_92-b14. I'm currently running on 1.8.0_312-b07 and the bug still exists; the ticket above is still open. It seems like it wouldn't be too hard to fix, since the workaround is really trivial (also called out in the bug): just call HttpUrlConnector.getDefaultSSLContext() at least once before you start using your client. I added this line of code (with a large comment describing why it was necessary) and now I no longer get the SSL handshake errors - the client now uses my custom SSLContext in all threads, rather than the default in most of them and mine in only one.
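
A minimal sketch of the workaround, using the corrected call from the follow-up above (ClientFactory is an invented name, not my actual code):

import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public final class ClientFactory {
    static {
        // Force the JDK to initialize its default SSL socket factory once,
        // on a single thread, before any Jersey client starts issuing
        // HTTPS requests concurrently (see the JDK bug referenced above).
        HttpsURLConnection.getDefaultSSLSocketFactory();
    }

    public static Client newClient(SSLContext customContext) {
        return ClientBuilder.newBuilder()
                .sslContext(customContext)
                .build();
    }
}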

HTH,
John


On Wed, May 25, 2022 at 9:36 AM John Calcote <john.calcote@xxxxxxxxx> wrote:
One more pretty important bit of information about this setup: the clients are completely asynchronous and heavily multithreaded. A client is likely to be used by multiple threads at once to send messages (currently to a single host, as my application is configured to use one client per outbound route).

On Wed, May 25, 2022 at 9:28 AM John Calcote <john.calcote@xxxxxxxxx> wrote:
Hi Jan,

Thanks for the response. I've updated my code accordingly. And thanks for the insight on clients. I have a related question that may be at least partially explained by what you told me about clients being heavy-weight.

In my application, I own both the client and the server. The server is a secure embedded Grizzly server. It's configured to "want" and "need" client authentication, and it uses a simple self-signed certificate. It does, however, use a dynamic X509 trust store of my own design. It's reasonably simple: it merely accepts modifications to an internal keystore, which it reapplies to a new trust manager before any TrustManager interface requests are satisfied. I've used this technique in the past and it seems to work well.
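
The idea, roughly (an illustrative sketch, not my actual class - the names are invented):

import java.security.KeyStore;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public final class ReloadableTrustManager implements X509TrustManager {
    private volatile X509TrustManager delegate;

    public ReloadableTrustManager(KeyStore initial) throws Exception {
        reload(initial);
    }

    // Reapply a modified keystore by building a fresh delegate before any
    // further TrustManager requests are served.
    public synchronized void reload(KeyStore keyStore) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(keyStore);
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                delegate = (X509TrustManager) tm;
                return;
            }
        }
        throw new IllegalStateException("no X509TrustManager available");
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        delegate.checkClientTrusted(chain, authType);
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        delegate.checkServerTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }
}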

The client is a Jersey 2.35 client (I can't upgrade to 3.x yet due to the Jakarta dependency issue). It's configured with an SSLContext that uses a client-side private key/cert and a simple trust-all trust manager. (The application is more interested in using TLS to encrypt the pipe and to identify clients than to guard against MITM attacks, so we're not really concerned about server spoofing.) I initially used the default HttpUrlConnector, but I was getting intermittent SSL handshake failures due to "SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target" errors.
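
For reference, the client-side context is set up along these lines (a sketch only - the keystore path and password are placeholders):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// Load the client's private key and certificate (placeholder path/password).
KeyStore clientKeys = KeyStore.getInstance("PKCS12");
try (InputStream in = Files.newInputStream(Paths.get("client.p12"))) {
    clientKeys.load(in, "changeit".toCharArray());
}
KeyManagerFactory kmf =
    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
kmf.init(clientKeys, "changeit".toCharArray());

// Trust-all trust manager: TLS is used here for encryption and client
// identity, not for server authentication.
TrustManager trustAll = new X509TrustManager() {
    public void checkClientTrusted(X509Certificate[] chain, String authType) {}
    public void checkServerTrusted(X509Certificate[] chain, String authType) {}
    public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
};

SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(kmf.getKeyManagers(), new TrustManager[] { trustAll }, null);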

Generally, in my experience, such errors either occur all the time or never. I've never heard of them being intermittent before.

I played with this configuration for a solid week, but every tweak I made seemed only to make the problem occur more often, rather than less. Finally, on a whim, I decided to try a different connector provider. I plugged in the ApacheConnectorProvider and, suddenly, all the SSL handshake failures disappeared! Weird! How could a different connector provider affect how the underlying JSSE component works?
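
(For anyone following along, the swap itself is just a ClientConfig change in Jersey 2.x; sslContext here is the custom context from the setup above:)

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.apache.connector.ApacheConnectorProvider;
import org.glassfish.jersey.client.ClientConfig;

ClientConfig config = new ClientConfig();
config.connectorProvider(new ApacheConnectorProvider());

Client client = ClientBuilder.newBuilder()
        .withConfig(config)
        .sslContext(sslContext)
        .build();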

Well, it seemed to work at first, but then I realized it wasn't working perfectly. The problem I was now having was that the Apache connector provider seemed to be limiting the number of connections I could make between my clients and servers. 

My application establishes a separate Jersey client for each server it connects to. These clients are cached and reused for each message sent to the associated host. In my test setup, I have four hosts, each acting as both a client and a server. Therefore, each client component talks to four other hosts (including itself). The client retries some failures on an exponential backoff delay. 

The problem is, a dozen or so connections would be made and then nothing would happen. I have a timeout reaper thread that monitors outstanding requests and times them out after 10 minutes. 10 minutes after the hang, all my threads come back with timeout errors, telling me that the Apache connector provider is simply freezing up and not sending these additional requests. 

So, I thought, why not try the pooling connection manager? (I'm now just complicating the situation, I know.) Here's the strange part - when I plug the pooling connection manager in, I start getting the SSL handshake errors again!
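
For completeness, the pooling setup looks roughly like this (a sketch - the pool limits are illustrative, and registering the custom SSLContext on the pool itself is an assumption worth verifying against the ApacheConnector source, since a user-supplied connection manager may not pick up the ClientBuilder's SSLContext automatically):

import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.socket.PlainConnectionSocketFactory;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.glassfish.jersey.apache.connector.ApacheClientProperties;
import org.glassfish.jersey.apache.connector.ApacheConnectorProvider;
import org.glassfish.jersey.client.ClientConfig;

// Register the custom SSLContext with the pool itself.
Registry<ConnectionSocketFactory> registry =
    RegistryBuilder.<ConnectionSocketFactory>create()
        .register("http", PlainConnectionSocketFactory.getSocketFactory())
        .register("https", new SSLConnectionSocketFactory(sslContext))
        .build();

PoolingHttpClientConnectionManager cm =
    new PoolingHttpClientConnectionManager(registry);
cm.setMaxTotal(100);          // Apache's defaults (20 total, 2 per route)
cm.setDefaultMaxPerRoute(20); // are easy to exhaust under concurrent load

ClientConfig config = new ClientConfig();
config.connectorProvider(new ApacheConnectorProvider());
config.property(ApacheClientProperties.CONNECTION_MANAGER, cm);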

If you (or anyone else in the community here) have any thoughts on what might be causing this strange behavior, I would very much appreciate your insights. I've been using Jersey and Grizzly for years now, and I can usually solve any issues on my own at this point, but this one has me totally stumped.

Regarding how this connects with the previous question: given what you said about clients being heavyweight and not really needing to be closed because of their long-lived nature, should I be using a single client to connect to all hosts? If using a few clients - one for each host - is OK, should I be using a single ApacheConnectorProvider instance for all clients? A single pooling connection manager?

Kindest regards,
John

On Tue, May 24, 2022 at 10:59 AM Jan Supol <jan.supol@xxxxxxxxxx> wrote:
Hi John,
From the top of my head, I'd say closing the client at point A or B will not reliably work. The client should not be closed before the response is read; closing the client drops the Jersey runtime classes required to read the response properly. So closing it at point A or B could work or might not, depending on whether garbage collection has taken place or on when the thread invoking the InvocationCallback processes it.

Note that the client does not need to be closed after a response; it can be reused for a new request.

The client itself is a lightweight object, but it keeps WeakReferences to heavy runtime objects. Closing the client makes sure the heavy runtime objects are released. However, Jersey tries to release the heavy objects on its own when they are no longer needed (garbage collected). In some cases, though, users keep references to Jersey internal objects, such as ClientResponse, which prevents proper garbage collection (and resource release). In that case, it is better to close the client. In an ideal situation, not closing the client should not cause leaks. I have seen more issues caused by not closing the Response.
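
For illustration, the pattern I would suggest (Response is AutoCloseable since JAX-RS 2.1, i.e. Jersey 2.26+; SomeClass and baseUri are from your example):

import javax.ws.rs.core.Response;

try (Response response = client.target(baseUri).request().get()) {
    if (response.getStatus() == 200) {
        SomeClass sc = response.readEntity(SomeClass.class);
        // process sc ...
    }
} // the Response is closed here; the client stays open for reuse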

HTH,
Jan

From: jersey-dev <jersey-dev-bounces@xxxxxxxxxxx> on behalf of John Calcote <john.calcote@xxxxxxxxx>
Sent: Saturday, May 21, 2022 1:25 AM
To: jersey-dev@xxxxxxxxxxx <jersey-dev@xxxxxxxxxxx>
Subject: [External] : [jersey-dev] client lifecycle question
 
Hello Jersey developers,

I have a question regarding when a client may be closed:

Client client = ClientBuilder.newBuilder()...<setup additional attributes>...build();
...
client.target(baseUri)
   .path(OSVS_CGC).path(sessionId.toString()).path(UNLOCK)
   .request()
   .async()
   .put(Entity.entity(req, MediaType.APPLICATION_JSON), new InvocationCallback<Response>() {

       @Override
       public void completed(Response response) {
           client.close();                   <---- B
           response.getStatus();
           SomeClass sc = response.readEntity(SomeClass.class);
           // process response
           ...
           client.close();                   <---- C
       }

       @Override
       public void failed(Throwable t) {
           client.close();                   <---- B
           // handle failure
           ...
           client.close();                   <---- C
       }
});
client.close();                              <---- A

Please, please, please do not rake me over the coals for closing my client right after using it once. That is not the point of this question. (I've already been through that wringer with some jackass over on Stack Overflow - I finally had to delete my question over there because he hijacked it for his own selfish, prideful purposes.)

This example is simplistic on purpose. In reality, my code doesn't call close(); rather, it merely decrements the reference count on the client. The client cache also holds a reference that keeps the client alive for the life of the process. When the cache finally shuts down, it releases its reference, and if anyone is using the client at that moment, their references keep it open until the last user is done with it.
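
Roughly, the wrapper looks like this (an illustrative sketch, names invented):

import java.util.concurrent.atomic.AtomicInteger;
import javax.ws.rs.client.Client;

final class CountedClient {
    private final Client client;
    private final AtomicInteger refs = new AtomicInteger(1); // the cache's own reference

    CountedClient(Client client) { this.client = client; }

    Client acquire() {
        refs.incrementAndGet();
        return client;
    }

    void release() {
        // Whoever drops the last reference (the cache at shutdown, or the
        // last in-flight user after that) closes the client.
        if (refs.decrementAndGet() == 0) {
            client.close();
        }
    }
}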

The main point is this: do the Jersey client authors expect me to drop my reference to (and potentially close) the client at point A, point B, or point C?

My rationale for asking this question is simple: I want to do things right so as not to close the client prematurely. Yes, I could play it safe and always just close it at point C - that would always work. However, if any code in completed or failed threw an exception, I'd have to catch it and close in a finally clause to make sure I actually closed the resource.
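
That finally-clause variant would look like this (sketch):

@Override
public void completed(Response response) {
    try {
        SomeClass sc = response.readEntity(SomeClass.class);
        // process response ...
    } finally {
        client.close(); // point C, reached even if processing throws
    }
}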

Thanks in advance,
John Calcote
