If you have memory leaks, have thoroughly investigated them with tools such as jconsole, YourKit, JProfiler, jvisualvm, or any of the other profiling and analysis tools, and can eliminate your own code as the source of the problem, read the following sections about how to prevent memory leaks in your application.
This feature is available for Jetty 7.6.6 and later.
Code that keeps references to a webapp classloader can cause memory leaks. These leaks fall generally into two categories: static fields and daemon threads.
A static field initialized with a reference to the webapp classloader survives undeploys: as Jetty undeploys and redeploys the webapp, the static reference lives on, so the webapp classloader can never be garbage collected.
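As an illustrative sketch (the class and field names here are hypothetical, standing in for JRE or library code), the static-field variant looks like this:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class StaticFieldLeakDemo {
    // Simulates a static field in JRE/library code that caches the
    // thread context classloader on first use (hypothetical example).
    static ClassLoader cached;

    static void useLibrary() {
        if (cached == null) {
            cached = Thread.currentThread().getContextClassLoader();
        }
    }

    public static void main(String[] args) {
        // Stand-in for a webapp classloader
        ClassLoader webappLoader = new URLClassLoader(new URL[0]);
        Thread.currentThread().setContextClassLoader(webappLoader);

        useLibrary(); // first call happens "inside the webapp"

        // The webapp is undeployed, but the static field still holds
        // the webapp classloader, so it can never be garbage collected.
        Thread.currentThread().setContextClassLoader(ClassLoader.getSystemClassLoader());
        System.out.println(cached == webappLoader);
    }
}
```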
Daemon threads started outside the lifecycle of the webapp keep a reference to the context classloader that created them, leading to a memory leak if that classloader belongs to a webapp. For a good discussion of the issue, see Anatomy of a PermGen Memory Leak.
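The daemon-thread variant can be sketched the same way: a thread created while the webapp classloader is the context classloader inherits, and then retains, that classloader (names below are illustrative):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DaemonThreadLeakDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a webapp classloader
        ClassLoader webappLoader = new URLClassLoader(new URL[0]);
        Thread.currentThread().setContextClassLoader(webappLoader);

        // A daemon thread created now inherits the context classloader
        // of its creator and holds it for as long as the thread lives.
        Thread daemon = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) {
                // exit on interrupt
            }
        });
        daemon.setDaemon(true);
        daemon.start();

        System.out.println(daemon.getContextClassLoader() == webappLoader);
        daemon.interrupt();
    }
}
```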
We provide a number of workaround classes that preemptively invoke the problematic code with the Jetty classloader, thereby ensuring the webapp classloader is not pinned. Be aware that since some of the problematic code creates threads, you should be selective about which preventers you enable, and use only those that are specific to your application.
Jetty includes the following preventers.
|Preventer Name|Problem Addressed|
|---|---|
|`AppContextLeakPreventer`|The call to `AppContext.getAppContext()` keeps a static reference to the context classloader.|
|`DOMLeakPreventer`|DOM parsing can cause the webapp classloader to be pinned, due to a static field of `com.sun.org.apache.xerces.internal.parsers.AbstractDOMParser` that retains the classloader in use during the first parse.|
|`DriverManagerLeakPreventer`|JDBC drivers registered with `java.sql.DriverManager` keep a reference to the classloader that loaded them, which can pin the webapp classloader.|
|`GCThreadLeakPreventer`|Calls to `sun.misc.GC.requestLatency` create a daemon thread which keeps a reference to the context classloader.|
|`SecurityProviderLeakPreventer`|Some security providers, such as `sun.security.pkcs11.SunPKCS11`, start a daemon thread which keeps a reference to the context classloader.|
You can individually enable each preventer by adding an instance to a Server with the `addBean(Object)` call.
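For example, here is a minimal sketch in code, assuming the jetty-server and jetty-util jars are on the classpath and using `AppContextLeakPreventer` as the preventer being enabled:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.preventers.AppContextLeakPreventer;

public class PreventerExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        // The preventer runs the problematic AppContext code once with
        // Jetty's own classloader when the server starts, so later calls
        // from a webapp cannot pin the webapp classloader.
        server.addBean(new AppContextLeakPreventer());
        System.out.println(server.getBean(AppContextLeakPreventer.class) != null);
    }
}
```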
You can add the equivalent in XML to the `$JETTY_HOME/etc/jetty.xml` file, or to any Jetty XML file that configures a Server instance. Be aware that if you have more than one Server instance in your JVM, you should configure these preventers on just one of them.
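As a sketch in Jetty XML, assuming `AppContextLeakPreventer` as the preventer being enabled, the `addBean` call translates to:

```xml
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <!-- Invoke the problematic AppContext code once with the Jetty
       classloader, so it cannot later pin a webapp classloader -->
  <Call name="addBean">
    <Arg>
      <New class="org.eclipse.jetty.util.preventers.AppContextLeakPreventer"/>
    </Arg>
  </Call>
</Configure>
```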
The JSP engine in Jetty is Jasper. It was originally developed under the Apache Tomcat project, but over time many different projects have forked it. All Jetty versions up to 6 used Apache-based Jasper exclusively; Jetty 6 used Apache Jasper only for JSP 2.0, and with the advent of JSP 2.1 switched to the Jasper from Sun's Glassfish project, which is now the reference implementation.
All forks of Jasper suffer from a problem whereby using JSP tag files puts the permgen space under pressure. This is because of the classloading architecture of the JSP implementation: each JSP file is effectively compiled and its class loaded in its own classloader to allow for hot replacement. Each JSP that references a tag file compiles the tag if necessary and then loads it using its own classloader. If you have many JSPs that refer to the same tag file, the tag's class is loaded over and over again into permgen space, once for each JSP. See Glassfish bug 3963 and Apache bug 43878. The Apache Tomcat project has closed this bug with status WON'T FIX; however, the Glassfish folks still have the bug open and have scheduled it to be fixed. When the fix becomes available, the Jetty project will pick it up and incorporate it into our release program.
This section describes garbage collection and direct ByteBuffer problems.
One symptom of this cluster of JVM-related memory issues is an OOM exception accompanied by a message such as `java.lang.OutOfMemoryError: requested xxxx bytes for xxx. Out of swap space?`
Oracle bug 4697804 describes how this can happen when the garbage collector needs to allocate a bit more space during its run and tries to resize the heap, but fails because the machine is out of swap space. One suggested workaround is to ensure that the JVM never tries to resize the heap, by setting the minimum heap size equal to the maximum heap size.
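As a sketch, with illustrative heap sizes and jar name (placeholders; tune for your own deployment):

```shell
# Equal -Xms and -Xmx pin the heap size, so the GC never attempts
# a resize that could fail when swap is exhausted
java -Xms2g -Xmx2g -jar start.jar
```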
Another workaround is to ensure you have configured sufficient swap space on your device to accommodate all programs you are running concurrently.
Exhausting native memory is another issue related to JVM bugs. The symptoms to look out for are the process size growing while heap use remains relatively constant. Both the JIT compiler and NIO ByteBuffers can consume native memory. Oracle bug 6210541 discusses a still-unsolved problem whereby the JVM itself allocates a direct ByteBuffer in some circumstances that the system never garbage collects, effectively eating native memory. Guy Korland's blog discusses this problem.
Because the JIT compiler also consumes native memory, a shortage may manifest in the JIT as OutOfMemory exceptions such as `Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested xxx bytes for ChunkPool::allocate. Out of swap space?`
By default, Jetty allocates and manages its own pool of direct ByteBuffers for I/O if you configure the NIO SelectChannelConnector. It also allocates MappedByteBuffers to memory-map static files via the DefaultServlet settings. However, you could be vulnerable to this JVM ByteBuffer allocation problem if you have disabled either of these options. For example, on Windows you may have disabled the use of memory-mapped buffers for the static file cache on the DefaultServlet to avoid the file-locking problem.
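As a sketch, assuming the DefaultServlet is declared in your webapp's `web.xml`, the setting in question is the `useFileMappedBuffer` init parameter:

```xml
<servlet>
  <servlet-name>default</servlet-name>
  <servlet-class>org.eclipse.jetty.servlet.DefaultServlet</servlet-class>
  <init-param>
    <!-- false avoids the Windows file-locking problem, but static files
         are then served through heap/direct buffers instead of mmap -->
    <param-name>useFileMappedBuffer</param-name>
    <param-value>false</param-value>
  </init-param>
</servlet>
```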