Configuring Jetty for high load, whether for load testing or for production, requires that the operating system, the JVM, Jetty, the application, the network and the load generation all be tuned.
The load generation machines must have their OS, JVM, etc., tuned just as much as the server machines.
The load should not be generated over the loopback network on the server machine itself, as this has unrealistic performance and latency, as well as different packet sizes and transport characteristics.
The load generator should generate a realistic load:
A common mistake is for a load generator to open relatively few connections, each kept extremely busy sending as many requests as possible. This causes the measured throughput to be limited by request latency rather than by server capacity: for example, 10 connections each waiting 5ms per request can never measure more than 2000 requests per second, however fast the server is (see Lies, Damned Lies and Benchmarks for an analysis of such an issue).
Another common mistake is to open a new TCP/IP connection for each request, resulting in many short-lived connections. This often fills the accept queue and causes limitations due to file descriptor and/or ephemeral port starvation.
A load generator should model the traffic profile from the normal clients of the server. For browsers, this is often between two and six connections that are mostly idle and that are used in sporadic bursts with read times in between. The connections are typically long held HTTP/1.1 connections.
Load generators should be written in asynchronous programming style, so that a limited number of threads does not restrict the maximum number of users that can be simulated. If the generator is not asynchronous, a thread pool of 2000 may only be able to simulate 500 or fewer users. The Jetty HttpClient is an ideal choice for building a load generator, as it is asynchronous and can simulate many thousands of connections (see the Cometd Load Tester for a good example of a realistic load generator).
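As a rough sketch (assuming the Jetty 9 HttpClient API; the URL, request count and connection limit below are placeholder values to adjust for your own test), an asynchronous load generator can look like this:

import java.util.concurrent.CountDownLatch;
import org.eclipse.jetty.client.HttpClient;

public class SimpleLoadGenerator
{
    public static void main(String[] args) throws Exception
    {
        String url = "http://localhost:8080/test"; // placeholder target URL
        int totalRequests = 10_000;                // placeholder load size

        HttpClient client = new HttpClient();
        // Allow enough concurrent connections and queued requests to model many users.
        client.setMaxConnectionsPerDestination(200);
        client.setMaxRequestsQueuedPerDestination(totalRequests);
        client.start();

        CountDownLatch done = new CountDownLatch(totalRequests);
        for (int i = 0; i < totalRequests; i++)
        {
            // Asynchronous send: the submitting thread does not block per request,
            // so a small thread pool can drive a large number of simulated users.
            client.newRequest(url).send(result ->
            {
                // Record result.isSucceeded(), status and latency here for statistics.
                done.countDown();
            });
        }
        done.await();
        client.stop();
    }
}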
Both the server machine and any load generating machines need to be tuned to support many TCP/IP connections and high throughput.
Linux does a reasonable job of self-configuring TCP/IP, but there are a few limits and defaults that you should increase. You can configure most of them with the sysctl command, or make them persistent in /etc/sysctl.conf.
You should increase TCP buffer sizes to at least 16MB for 10G paths and tune the autotuning (although you now need to consider buffer bloat).
$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.core.wmem_max=16777216
$ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
$ sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
net.core.somaxconn controls the size of the connection listening queue (the accept backlog). The default value is 128; if you are running a high-volume server and connections are being refused at the TCP level, you need to increase this. It is a trade-off: if you set it too high, resource problems occur because a large number of accepted connections sit pending, waiting for the server to process them, but if you set it too low, connections are refused under bursts of traffic.
$ sysctl -w net.core.somaxconn=4096
net.core.netdev_max_backlog controls the size of the incoming packet queue between the network interface and the upper-layer (Java) processing. The default (2048) may be increased; it is also worth raising net.ipv4.tcp_max_syn_backlog, which limits the number of half-open (SYN_RECV) connections that are remembered, and enabling net.ipv4.tcp_syncookies so that connection attempts are not dropped when that backlog overflows:
$ sysctl -w net.core.netdev_max_backlog=16384
$ sysctl -w net.ipv4.tcp_max_syn_backlog=8192
$ sysctl -w net.ipv4.tcp_syncookies=1
If many outgoing connections are made (for example, on load generators), the operating system might run low on ephemeral ports. Thus it is best to increase the port range, and to allow sockets in TIME_WAIT to be reused:
$ sysctl -w net.ipv4.ip_local_port_range="1024 65535"
$ sysctl -w net.ipv4.tcp_tw_recycle=1
Note that net.ipv4.tcp_tw_recycle is known to cause problems for clients behind NAT and was removed in Linux 4.12; on such kernels use net.ipv4.tcp_tw_reuse=1 instead.
Busy servers and load generators may run out of file descriptors, as the system defaults are normally low. These can be increased for a specific user in /etc/security/limits.conf:
theusername hard nofile 40000
theusername soft nofile 40000
Linux supports pluggable congestion control algorithms. To get a list of congestion control algorithms that are available in your kernel run:
$ sysctl net.ipv4.tcp_available_congestion_control
If cubic and/or htcp are not listed, you need to research the control algorithms for your kernel. You can try setting the control to cubic with:
$ sysctl -w net.ipv4.tcp_congestion_control=cubic
Intermediaries such as nginx may default to non-persistent HTTP/1.0 connections to the back-end server. Make sure persistent HTTP/1.1 connections are used between the intermediary and Jetty (for nginx, set proxy_http_version 1.1 for the proxied locations).
Tune the Garbage Collection
Allocate sufficient memory
Use the -server option
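For example, a Jetty instance might be started with explicit heap settings and GC logging enabled (the sizes below are placeholders; choose values based on measurement of your application):
$ java -server -Xms2048m -Xmx2048m -verbose:gc -jar start.jar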
The standard rule of thumb for the number of acceptors to configure is one per CPU on a given machine.
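As a minimal sketch (assuming the Jetty 9 ServerConnector API, with a placeholder port), the acceptor count can be set when constructing the connector:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class AcceptorConfig
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();
        int acceptors = Runtime.getRuntime().availableProcessors(); // one acceptor per CPU, per the rule of thumb
        int selectors = -1; // -1 lets Jetty choose a default number of selectors
        ServerConnector connector = new ServerConnector(server, acceptors, selectors);
        connector.setPort(8080); // placeholder port
        server.addConnector(connector);
        server.start();
        server.join();
    }
}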