Hi Joakim,
Thanks for the reply.
Nice to know about the JMX stuff.
Per this statement:
The wiki about high load doesn't mention the maxQueued configuration for QueuedThreadPool, as that's not related to high load in our minds.
It would be like trying to improve the performance of an F1 racer by changing the length of the spark plug wires.
Actually the Eclipse Wiki does mention it -- indirectly.
AFAICT, the ArrayBlockingQueue configuration below is exactly equivalent to specifying maxQueued on the QueuedThreadPool.
Thread Pool
It is very important to limit the task queue of Jetty. By default, the queue is unbounded! As a result, under high load in excess of the processing power of the webapp, Jetty will keep a lot of requests on the queue. Even after the load has stopped,
Jetty will appear to have stopped responding to new requests, as it still has lots of requests on the queue to handle.
For a high-reliability system, it should reject the excess requests immediately (fail fast) by using a queue with a bounded capacity. The capacity (maximum queue length) should be calculated according to the tolerable "no-response" time. For example,
if the webapp can handle 100 requests per second, and you can allow it one minute to recover from excessively high load, you can set the queue capacity to 60*100=6000. If it is set too low, it will reject requests too soon and can't handle a normal load spike.
Below is a sample configuration:
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Set name="ThreadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <!-- specify a bounded queue -->
      <Arg>
        <New class="java.util.concurrent.ArrayBlockingQueue">
          <Arg type="int">6000</Arg>
        </New>
      </Arg>
      <Set name="minThreads">10</Set>
      <Set name="maxThreads">200</Set>
      <Set name="detailedDump">false</Set>
    </New>
  </Set>
</Configure>
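As a side note, the fail-fast behavior that a bounded queue gives the pool is easy to see with nothing but the JDK; the class name and the tiny capacity here are mine, purely for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueDemo {
    // offer() is non-blocking: it returns false the moment the queue is
    // full, which is the "fail fast" behavior the wiki describes.
    public static int accepted(BlockingQueue<Runnable> jobQ, int jobs) {
        int ok = 0;
        for (int i = 0; i < jobs; i++) {
            if (jobQ.offer(() -> { /* pretend request handler */ })) {
                ok++;
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        // The wiki's sizing rule: capacity = requests/sec * tolerable
        // recovery seconds, e.g. 100 * 60 = 6000 in the config above.
        // A capacity of 3 keeps the demo readable.
        BlockingQueue<Runnable> jobQ = new ArrayBlockingQueue<>(3);
        System.out.println("accepted " + accepted(jobQ, 5) + " of 5"); // accepted 3 of 5
    }
}
```

In the real pool, that failed offer is what lets the server refuse excess work immediately instead of queueing it indefinitely.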
And for exactly the reason quoted from the Wiki above, I need a bounded queue.
I'd rather return a 503 and keep the service humming along nicely than overload it and destroy performance for everyone. Then, if possible, add more servers or tune accordingly.
And I'd rather not run, say, Apache, in front of Jetty to give me a "Listen Backlog" because of the added complexity.
So is there another way I can accomplish this in Jetty 7? (We unfortunately cannot move to Jetty 9 yet.)
Is the ArrayBlockingQueue truly more heinous than allowing my systems to get overloaded?
The ExecutorThreadPool, while available in Jetty 7, does appear to offer bounded job queueing.
But it also uses the problematic ArrayBlockingQueue.
So would you recommend that over the QueuedThreadPool, since both appear to have the same weakness?
Any other suggestions are greatly appreciated.
Thanks much,
-- Chris
BTW: Thanks for the pointers to the Mechanical Sympathy discussion. It looks fascinating.
I'll start with the JMX part of the question first.
Looks like the maxQueued attribute is not exposed.
As for the rest of the question...
We had too many issues with ArrayBlockingQueue, so we removed support for maxQueued and the ArrayBlockingQueue from QueuedThreadPool starting in Jetty 9.
It started out as a performance question, then became part of our mechanical sympathy efforts.
Eventually, some performance tests were run, and we determined that ArrayBlockingQueue performs some harsh JDK locking that actually hurts performance.
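To make the contrast concrete, here is a minimal JDK-only sketch; the class name is mine, and the comments, not the code, carry the locking point — functionally the two queues behave the same:

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueContrastDemo {
    // ArrayBlockingQueue guards *both* ends with a single ReentrantLock,
    // so under load every producer contends with every consumer.
    // ConcurrentLinkedQueue is lock-free (CAS-based), closer in spirit
    // to the structures Jetty 9 moved toward.
    public static int drain(Queue<Integer> q, int n) {
        for (int i = 0; i < n; i++) {
            q.offer(i);
        }
        int count = 0;
        while (q.poll() != null) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(drain(new ArrayBlockingQueue<>(100), 50));  // 50
        System.out.println(drain(new ConcurrentLinkedQueue<>(), 50));  // 50
    }
}
```

The functional results are identical; the difference only shows up as lock contention under many concurrent producers and consumers.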
So we removed it as part of our Queue/ThreadPool cleanup.
However, we also made Jetty 9 use the built-in java.util.concurrent techniques, which allowed us to expose that entire framework as an option for developers to use as the ThreadPool for Jetty.
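For a flavor of what building on java.util.concurrent directly looks like, here is a hedged sketch (the class and method names are mine, not Jetty's) of a bounded pool that fails fast when its queue is full:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FailFastExecutorDemo {
    public static ThreadPoolExecutor newPool(int threads, int queueCap) {
        // AbortPolicy throws RejectedExecutionException when the bounded
        // queue is full -- the spot where a server could return a 503.
        return new ThreadPoolExecutor(threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCap),
                new ThreadPoolExecutor.AbortPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool(1, 1);
        CountDownLatch block = new CountDownLatch(1);
        // First task occupies the single worker thread...
        pool.execute(() -> { try { block.await(); } catch (InterruptedException e) { } });
        // ...second task fills the one-slot queue...
        pool.execute(() -> { });
        try {
            pool.execute(() -> { }); // ...third task is rejected: queue full
            System.out.println("accepted");
        } catch (RejectedExecutionException e) {
            System.out.println("rejected"); // prints "rejected"
        }
        block.countDown();
        pool.shutdown();
    }
}
```

Catching that RejectedExecutionException (or installing a custom RejectedExecutionHandler) is where a fail-fast 503 would come from.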
The wiki about high load doesn't mention the maxQueued configuration for QueuedThreadPool, as that's not related to high load in our minds.
It would be like trying to improve the performance of an F1 racer by changing the length of the spark plug wires.