Things to check ...
* Are you using 100% of the CPU? (common bottleneck)
* Are you using 100% of the network capacity? (common bottleneck)
* Are you testing with multiple HTTP clients (ideally on different machines on the same network)?
* Have you tweaked any defaults? (leave everything at the defaults, test, test again, gather numbers, THEN change the defaults to suit your needs, and keep testing and gathering numbers, even in production, so you can anticipate changes)
* Are you logging your GC behavior/activity? (do this to see whether your performance hit comes from excessive memory usage and bad GC behavior)
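On that last point, GC logging is just a command-line switch on HotSpot JVMs. A sketch (the jar name here is a placeholder, not your actual application):

```shell
# Java 9+ unified logging: all GC events, with timestamps, to gc.log
java -Xlog:gc*:file=gc.log:time,uptime,level,tags \
     -jar my-app.jar   # placeholder jar name

# Java 8 and earlier equivalent
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log \
     -jar my-app.jar
```

Run your load test with this enabled and look for long pauses or rapidly repeating collections in gc.log.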
Considering that you are only hitting 11,000 requests a second against a blank endpoint, I suspect you don't have enough clients performing your testing.
Localhost testing (which avoids the bottlenecks that network testing introduces) on a laptop from 2012 can hit 80,000 requests a second against a Jetty instance running CometD with configured endpoints.
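As a rough sketch of what a multi-client localhost test looks like, here is a minimal load driver using the JDK's built-in `java.net.http.HttpClient`. Everything here (the throwaway `com.sun.net.httpserver` endpoint standing in for your server, the client and request counts) is an illustrative placeholder, not the setup behind the numbers above:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {

    // Fires (clients * requestsPerClient) GET requests from concurrent
    // workers and returns the number of 200 responses received.
    static int runLoad(String url, int clients, int requestsPerClient) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        List<Future<Integer>> results = new ArrayList<>();
        for (int c = 0; c < clients; c++) {
            results.add(pool.submit(() -> {
                int ok = 0;
                for (int i = 0; i < requestsPerClient; i++) {
                    HttpResponse<Void> resp =
                        http.send(req, HttpResponse.BodyHandlers.discarding());
                    if (resp.statusCode() == 200) ok++;
                }
                return ok;
            }));
        }
        int total = 0;
        for (Future<Integer> f : results) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // Trivial local endpoint standing in for the server under test.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            exchange.sendResponseHeaders(200, -1); // 200 with no body
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.start();
        String url = "http://127.0.0.1:" + server.getAddress().getPort() + "/";

        long start = System.nanoTime();
        int total = runLoad(url, 8, 250); // arbitrary small numbers
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d OK responses, ~%.0f req/s%n", total, total / secs);

        server.stop(0);
    }
}
```

Real testing should use a dedicated tool (wrk, ab, JMeter, etc.) from multiple machines, but even a toy driver like this shows why a single client loop caps out well below what the server can actually do.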
We have a wide range of users that can easily saturate their network with Jetty (you can't go any faster than what the network can handle).
* From an original HTC G1 Android device (500 MHz CPU, 128 MB of RAM) serving over WiFi at 54 Mbit/s
* Through an old Raspberry Pi (1.6 GHz quad-core ARM CPU, 1 GB of RAM) serving over USB/WiFi at 110 Mbit/s
* To commodity hardware with Intel i5 CPUs and 8 GB of RAM serving at 1 Gbit/s
* To modern 64-core machines with 128 GB of RAM serving at 10 Gbit/s
* All the way to specialty compute engines with 200+ cores and 1 TB of RAM serving at Gen 6/7 Fibre Channel speeds of 50 Gbit/s (not sure if this is saturation or not, but it's darned fast)
People use Jetty across a wide spectrum of uses, from the small and barely used to large production installations at Fortune 500 companies, and even in places that surprise us (NASA, high-speed trading groups, various well-respected international research facilities, dynamic testing infrastructures, etc.).