Re: [jetty-dev] Help explaining why FutureCallback.get would be showing up as a hotspot in profiling?

Simone,

I'm not sure I follow you here.  In this case we specifically want a
blocking write, because it is coming from a servlet output stream that
has blocking semantics.  So we want to block, and the future callback
get() is the right place for us to be blocking.

The issue in this case appears to be that we are not waking up to
finish the write when we should.
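
To be clear about the pattern we are talking about, here is a rough
sketch (in the same spirit as the real code, but with made-up names;
it is not the actual Jetty implementation) of how a blocking servlet
write can be layered on top of an asynchronous write by waiting on a
callback:
---snip---
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.CountDownLatch;

class BlockingWriteSketch
{
    interface Callback
    {
        void completed();
        void failed(Throwable cause);
    }

    // Hypothetical asynchronous write: here simulated by handing the
    // flush to another thread, which completes the callback once the
    // bytes have been "written".
    void asyncWrite(final ByteBuffer buffer, final Callback callback)
    {
        new Thread(new Runnable()
        {
            public void run()
            {
                buffer.position(buffer.limit()); // pretend the bytes were flushed
                callback.completed();
            }
        }).start();
    }

    // Blocking write, as required by servlet output stream semantics:
    // start the asynchronous write, then park until the callback fires.
    void blockingWrite(ByteBuffer buffer) throws IOException
    {
        final CountDownLatch done = new CountDownLatch(1);
        final Throwable[] failure = new Throwable[1];
        asyncWrite(buffer, new Callback()
        {
            public void completed() { done.countDown(); }
            public void failed(Throwable cause) { failure[0] = cause; done.countDown(); }
        });
        try
        {
            done.await();   // this is where FutureCallback.get() parks the thread
        }
        catch (InterruptedException e)
        {
            throw new IOException(e);
        }
        if (failure[0] != null)
            throw new IOException(failure[0]);
    }
}
---snip---
If the completion never arrives (or arrives late), the writing thread
just sits parked in that await, which is exactly what Aaron's profile
is showing.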

Aaron,

What are you using to generate your load?  Ideally I'd like to
generate the same load myself and try to reproduce the problem.  Also,
some more info about what the requests are like would be good.  You say
they do a scan of an index, so I'm guessing that is some 10s of ms of
busy CPU time and then they write the response.  How big is the
response?  How is it written: in lots of little writes or in one big
write?
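
To make that last question concrete, here is the kind of difference I
mean (a hypothetical servlet, just to illustrate the two write styles):
---snip---
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WriteStyleServlet extends HttpServlet
{
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException
    {
        // Hypothetical: whatever work produces the response body.
        byte[] result = scanIndex(request.getParameter("q"));

        response.setContentLength(result.length);
        OutputStream out = response.getOutputStream();

        // Style A: one big write -- a single blocking write of the whole body.
        out.write(result);

        // Style B: lots of little writes -- each call can block
        // independently, so a slow client is felt on every chunk.
        // for (int i = 0; i < result.length; i += 512)
        //     out.write(result, i, Math.min(512, result.length - i));
    }

    private byte[] scanIndex(String query)
    {
        return new byte[0]; // placeholder for the real index scan
    }
}
---snip---
With blocking semantics the write style changes how often a thread can
end up parked waiting for the client, so it matters for interpreting
the profile.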

cheers

On 5 October 2012 07:03, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
> Aaron,
>
> On Thu, Oct 4, 2012 at 12:52 PM, Aaron Daubman <daubman@xxxxxxxxx> wrote:
>> Greetings,
>>
>> After moving from Jetty 7.6.5 to 9.0M0 we've seen a degradation in
>> performance, and I am attempting to troubleshoot it.
>>
>> The hotspot I am seeing is that several seconds may be spent in
>> FutureCallback.get (line 115), which is just a call to _done.await().
>>
>> Would somebody please educate me on whether this is a red flag or
>> typically symptomatic of some issue I might be able to focus on (or
>> any other possible indicators)?
>>
>> From jetty/util/FutureCallback.java:
>> ---snip---
>>  113    public C get() throws InterruptedException, ExecutionException
>>  114    {
>> *115        _done.await();
>>  116        if (_completed)
>>  117            return _context;
>>  118        if (_cause instanceof CancellationException)
>>  119            throw (CancellationException) new CancellationException().initCause(_cause);
>>  120        throw new ExecutionException(_cause);
>>  121    }
>>
>> ---snip---
>>
>> Here's the partial call graph I am looking at:
>> ---snip---
>> org.eclipse.jetty.server.HttpOutput:write:174 (method time = 0 ms, total time = 4885 ms)
>>  org.eclipse.jetty.server.HttpChannel:write:608 (method time = 0 ms, total time = 4885 ms)
>>   org.eclipse.jetty.server.HttpConnection:send:410 (method time = 0 ms, total time = 4885 ms)
>>    org.eclipse.jetty.server.HttpConnection:send:372 (method time = 0 ms, total time = 4885 ms)
>>     org.eclipse.jetty.server.HttpConnection:blockingWrite:419 (method time = 0 ms, total time = 4885 ms)
>>      org.eclipse.jetty.util.FutureCallback:get:115 (method time = 4885 ms, total time = 4885 ms)
>> ---snip---
>>
>> If background is helpful, this Jetty instance is running a single Solr
>> war, has a 50 GB heap, typically uses 12-20 GB, and is being hit with
>> only around 50-100 requests per minute. Each request does need to
>> scan a 70 GB Solr index; however, before the upgrade we were seeing
>> average and pretty tight response times of around 100-200 ms. Now they
>> vary much more wildly (regularly spiking to 9000 ms) and average
>> 400-600 ms.
>
> This spot is already on our TODO list, because HttpChannel should be
> completely asynchronous.
> In your particular case it seems there is a slow client and Jetty is
> waiting for this client to read the content.
> As I said, Jetty should not do a blocking wait.
> It will soon be fixed; bear with us :)
>
> Simon
> --
> http://cometd.org
> http://webtide.com
> Developer advice, services and support
> from the Jetty & CometD experts.
> ----
> Finally, no matter how good the architecture and design are,
> to deliver bug-free software with optimal performance and reliability,
> the implementation technique must be flawless. -- Victoria Livschitz
> _______________________________________________
> jetty-dev mailing list
> jetty-dev@xxxxxxxxxxx
> https://dev.eclipse.org/mailman/listinfo/jetty-dev

