Re: [jetty-users] DirectByteBuffer native memory problems (OOM) with Jetty 9.0/9.1
Hi,
On Thu, Nov 21, 2013 at 2:47 PM, Emmeran Seehuber <rototor@xxxxxxxxxx> wrote:
> Hello everybody,
>
> I'm using Jetty as a standalone embedded web server behind an nginx, and
> it has been working great so far. Recently I upgraded from 7.6.4 to 9.0,
> and two days ago to 9.1.0.v20131115.
>
> Since my upgrade to 9.0 I have experienced native memory leaks, and they
> persist with 9.1. By native memory leaks I mean that the process memory
> size grows very large until I get OOM exceptions for DirectByteBuffers.
> It takes about two weeks of uptime for the server process to get there.
>
> Since the process has a 7 GB heap, the max direct memory limit also seems
> to be 7 GB.
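(For reference: when -XX:MaxDirectMemorySize is not set, HotSpot defaults
the direct memory limit to roughly the -Xmx value, which matches the 7 GB
above. To surface a direct-buffer leak sooner, the limit can be capped
explicitly; the 1g below is just a placeholder value, not a recommendation:

  -XX:MaxDirectMemorySize=1g
)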
>
> First I thought that the many threads started/stopped in the thread pool
> were the problem: I specified an idle timeout of -1, and still many
> threads are restarted. Some answers on the web about DirectByteBuffer OOM
> exceptions also pointed in this direction, but I don't think that's the
> problem.
>
> I think the problem may be in ArrayByteBufferPool. It allocates an
> unbounded number of ByteBuffers and caches them in an unbounded
> ConcurrentLinkedQueue, so at peak times many buffers are allocated and
> never freed. On the other hand, this doesn't really fit the amount of
> allocated memory. At the moment the server shows these direct memory
> allocations (according to the MBean info):
>
> java.nio
> BufferPool
> name=direct
> - Count 384
> - MemoryUsed 2728821147
> - Name direct
> - ObjectName java.nio:type=BufferPool,name=direct
> - TotalCapacity 2728821147
>
> That would mean an average buffer size of about 7 MB. But as far as I
> understand, ArrayByteBufferPool only allocates buffers up to 64 KB by
> default?!
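(The same numbers can also be read in-process via the platform MXBeans,
which makes it easy to log them periodically and correlate with load; a
minimal sketch using the standard Java 7 API, with a made-up class name:

  import java.lang.management.BufferPoolMXBean;
  import java.lang.management.ManagementFactory;

  public class DirectBufferStats {
      public static void main(String[] args) {
          // The JVM exposes one BufferPoolMXBean per pool ("direct", "mapped")
          for (BufferPoolMXBean pool :
                  ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
              System.out.printf("%s: count=%d, used=%d, capacity=%d%n",
                      pool.getName(), pool.getCount(),
                      pool.getMemoryUsed(), pool.getTotalCapacity());
          }
      }
  }
)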
>
> Current threads are: Count = 230, Maximum = 233, Started overall = 599
>
> These numbers are after the process has been running for about 32 hours.
> I get about 320 requests/minute at peak times. All these numbers come
> from the embedded JavaMelody monitoring.
>
> The JVM arguments are:
> -server -XX:+UseCompressedOops -XX:MaxPermSize=512m -Xms7000M -Xmx7000M
> -XX:+UseParallelOldGC -XX:+DoEscapeAnalysis -XX:+OptimizeStringConcat
> JDK 1.7.0_45
>
> The thread pool is configured this way:
>
> QueuedThreadPool tp = new QueuedThreadPool();
> tp.setMaxThreads(256);
> tp.setMinThreads(5);
> tp.setIdleTimeout(-1); // idle threads should never be stopped
> tp.setName("Http Server Thread Pool");
> tp.setDaemon(true);
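(For context, a minimal sketch of how such a pool is typically handed to
an embedded Jetty 9 server; this is an assumed setup, not the poster's
actual code:

  Server server = new Server(tp);  // the pool is passed at construction
  ServerConnector connector = new ServerConnector(server);
  connector.setPort(8080);         // placeholder port
  server.addConnector(connector);
  server.start();
)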
>
> I had no such problems with Jetty 7.x. Regularly calling System.gc()
> every hour does not help. Apart from that, I have no other GC problems.
>
> Any ideas what could cause these problems?
> Why are threads restarted in the pool even when I specify an idle timeout
> of -1? I also regularly schedule some runnables of my own on the thread
> pool, but they should not cause these problems, should they? In 99.99% of
> all cases they do not throw any exceptions.
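(One way to rule those runnables out entirely is to guard each one so
nothing can escape to the pool's worker threads; a minimal sketch with a
hypothetical wrap() helper:

  // Hypothetical helper: ensures no Throwable from the task ever
  // reaches the pool's worker thread.
  static Runnable wrap(final Runnable task) {
      return new Runnable() {
          @Override
          public void run() {
              try {
                  task.run();
              } catch (Throwable t) {
                  t.printStackTrace(); // replace with real logging
              }
          }
      };
  }

  // Usage: tp.execute(wrap(myTask));
)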
Seems like a bug.
I would keep the ThreadPool at the default configuration for now, to have
fewer variables in the system.
Would you be able to replace the usage of ArrayByteBufferPool with
MappedByteBufferPool and see if the problem persists?
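For a quick test, the pool can be passed explicitly on the connector; a
minimal sketch, assuming the Jetty 9.1 embedded API (the port is a
placeholder):

  import org.eclipse.jetty.io.MappedByteBufferPool;
  import org.eclipse.jetty.server.HttpConnectionFactory;
  import org.eclipse.jetty.server.Server;
  import org.eclipse.jetty.server.ServerConnector;

  Server server = new Server(tp);
  ServerConnector connector = new ServerConnector(
          server,
          null,                        // executor: default to the server's pool
          null,                        // scheduler: default
          new MappedByteBufferPool(),  // explicit buffer pool under test
          -1,                          // acceptors: default
          -1,                          // selectors: default
          new HttpConnectionFactory());
  connector.setPort(8080);             // placeholder port
  server.addConnector(connector);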
Also, please file an issue about this; from the JMX numbers it really
looks like there is a problem.
Thanks!
--
Simone Bordet
----
http://cometd.org
http://webtide.com
http://intalio.com
Developer advice, training, services and support
from the Jetty & CometD experts.
Intalio, the modern way to build business applications.