[jetty-users] Flow control in AsyncMiddleManServlet


I'm using AsyncMiddleManServlet to proxy plain HTTP clients to an upstream SSL server, and I wanted to ask for your help in understanding how flow control works and which configuration parameters influence it.

* clients send plain HTTP/1.1 to the proxy
* the proxy then does SSL encryption/decryption to the upstream server (also HTTP/1.1)

Typically the local clients are near the proxy (e.g. on a LAN) and there is throttled bandwidth / latency to the upstream server.

What I seem to be seeing (anecdotally) is that under high load the cost of the SSL encryption to the upstream eventually maxes out the available CPU and causes clients to time out, which is of course to be expected at some level of load.

To try to zero in on the above, I'm just trying to understand the basic flow-control workflow. Let's say I had 1000 clients, all concurrently sending large HTTP PUTs with large files (10 MB, say) to the proxy.

My assumptions are as follows (please correct any that are wrong, as that would be a big help):
* a single socket read of $chunk_size will be performed on each client connection, up to the maximum read size (which seems to be 4 KB by default, but correct me if I'm wrong)
* each such read would then have to be written to the upstream connection (which in my case has SSL enabled) on a one-to-one basis, i.e. one read of $chunk_size must be written to the upstream /before/ the next read can happen on the client socket
* /OR/ is it the case that many reads can happen on the client socket and are then buffered/queued in the proxy, meaning a fast sender and a slow upstream can end up with buffer bloat at this level? If so, which configuration parameters influence this?
* for these reads/writes, it looks like Jetty has a pool of threads, but I assume a thread is only occupied for the duration of the I/O operation and is not in any sense blocked on one connection. So even if there were only one thread handling upstream connections, it could still handle (say) 100 client connections by polling for reads/writes, much as something like libevent would do in the C world.
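To make the second assumption concrete, here is my mental model written out as the naive blocking equivalent (this is purely my own sketch to be corrected, not Jetty's actual code; `pump`, `client` and `upstream` are names I've made up):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkPump {
    // Sketch of the one-read-one-write coupling I'm assuming: the next
    // read from the client happens only after the previous chunk has been
    // fully handed to the upstream, so a slow upstream naturally throttles
    // the client side and in-proxy buffering is bounded to one chunk.
    public static long pump(InputStream client, OutputStream upstream, int chunkSize)
            throws IOException {
        byte[] chunk = new byte[chunkSize];
        long total = 0;
        int n;
        while ((n = client.read(chunk)) != -1) {
            // write must complete before we loop back to read again
            upstream.write(chunk, 0, n);
            total += n;
        }
        upstream.flush();
        return total;
    }
}
```

Is that roughly what happens (asynchronously, via callbacks, rather than blocking like above), or is there a deeper queue between the read side and the write side?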

The above is just a basic sketch to give a rough idea of my current crude/flawed mental model, and hopefully it will serve as a way for people with knowledge to fill in some blanks for me. Some of my assumptions may be way off base, so please correct me where I'm wrong. Like I said, I'm just after a broad-brush understanding to help with debugging.
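For reference, the init-params below are the knobs I've found in the ProxyServlet documentation that look relevant to buffering. I'm assuming (please correct me) that `requestBufferSize` / `responseBufferSize` are what govern the chunk size discussed above, and that they apply to AsyncMiddleManServlet as well; the values here are just illustrative:

```xml
<!-- web.xml fragment: illustrative values only -->
<servlet>
  <servlet-name>proxy</servlet-name>
  <servlet-class>org.eclipse.jetty.proxy.AsyncMiddleManServlet</servlet-class>
  <init-param>
    <param-name>requestBufferSize</param-name>
    <param-value>4096</param-value>
  </init-param>
  <init-param>
    <param-name>responseBufferSize</param-name>
    <param-value>4096</param-value>
  </init-param>
  <init-param>
    <param-name>timeout</param-name>
    <param-value>60000</param-value>
  </init-param>
</servlet>
```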

All feedback welcome/appreciated.

