Note that the HttpChannel.write() API is really just a facade over HttpTransport methods:
void send(HttpGenerator.ResponseInfo info, ByteBuffer content, boolean lastContent, Callback callback);
which are implemented either by the HTTP connection or the SPDY connection. I have modified the HttpConnection so that these asynchronous methods simply wrap the callback and call asynchronously all the way down to the AbstractChannel WriteFlusher. The SPDY implementation still needs to be updated here, as it may still be simulating async behaviour with blocking implementations of these calls.
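To illustrate the facade idea, here is a minimal Java sketch of a channel write delegating to an asynchronous transport send. All the type and method names below are illustrative stand-ins, not Jetty's actual API:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical facade sketch: the channel's write() just delegates to a
// transport send() that completes a Callback asynchronously, never blocking.
interface Callback {
    void succeeded();
    void failed(Throwable x);
}

interface Transport {
    void send(ByteBuffer content, boolean lastContent, Callback callback);
}

class AsyncTransport implements Transport {
    @Override
    public void send(ByteBuffer content, boolean lastContent, Callback callback) {
        // A real implementation would hand the buffer down to a WriteFlusher;
        // here we just consume it and complete the callback.
        content.position(content.limit());
        callback.succeeded();
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        AtomicBoolean done = new AtomicBoolean();
        new AsyncTransport().send(ByteBuffer.wrap("hello".getBytes()), true, new Callback() {
            public void succeeded() { done.set(true); }
            public void failed(Throwable x) { x.printStackTrace(); }
        });
        System.out.println("completed=" + done.get());
    }
}
```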
Similarly, HttpInput now has a state machine to control blocking or asynchronous reads, based around state classes with behaviour rather than just an enum:
protected static class State
{
public void waitForContent(HttpInput<?> in) throws IOException
{
}
public int noContent() throws IOException
{
return -1;
}
public boolean isEOF()
{
return false;
}
}
protected static final State BLOCKING= new State()
protected static final State ASYNC= new State()
protected static final State EARLY_EOF= new State()
protected static final State EOF= new State()
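As a hedged sketch of how behaviour can live in the state objects themselves, the example below shows two anonymous subclasses overriding different methods, so callers never need a switch statement. The overrides are purely illustrative; the real BLOCKING and EOF implementations differ:

```java
import java.io.IOException;

// Illustrative sketch: states as classes with behaviour rather than an enum.
public class StateDemo {
    static class State {
        public void waitForContent() throws IOException {}
        public int noContent() throws IOException { return -1; }
        public boolean isEOF() { return false; }
    }

    // BLOCKING overrides waitForContent (a real impl would wait on a lock).
    static final State BLOCKING = new State() {
        @Override public void waitForContent() throws IOException {
            // wait on a lock/condition for content to arrive
        }
    };

    // EOF only needs to override isEOF.
    static final State EOF = new State() {
        @Override public boolean isEOF() { return true; }
    };

    public static void main(String[] args) throws IOException {
        System.out.println("blocking.isEOF=" + BLOCKING.isEOF());
        System.out.println("eof.isEOF=" + EOF.isEOF());
    }
}
```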
This is intended eventually to be a lock-free implementation, but it currently uses synchronized blocks to maintain compatibility with the content-queuing implementation. The content queue that used to be part of the base HttpInput has now been moved to a QueuedHttpInput derivation that is used by test harnesses and SPDY. This implementation extends blockForContent to wait on the queue, and any call to add content either wakes up a blocked reader or calls channel.onReadPossible() for asynchronous readers.
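The wake-or-notify behaviour of the queued approach can be sketched as below. This is a simplified model under assumed names (Jetty's real QueuedHttpInput differs), but it shows the two paths an addContent call can take:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hedged sketch of a queued input: addContent() either wakes a blocked
// reader or notifies an async reader. Names are illustrative, not Jetty's.
public class QueuedInputDemo {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean asyncMode;
    private Runnable onReadPossible;

    public synchronized void addContent(String content) {
        queue.add(content);
        if (asyncMode && onReadPossible != null)
            onReadPossible.run();   // async reader: schedule a redispatch
        else
            notifyAll();            // blocking reader: wake it up
    }

    public synchronized String blockForContent() throws InterruptedException {
        while (queue.isEmpty())
            wait();
        return queue.poll();
    }

    public synchronized void setReadListener(Runnable listener) {
        asyncMode = true;
        onReadPossible = listener;
    }

    public static void main(String[] args) throws Exception {
        QueuedInputDemo in = new QueuedInputDemo();
        Thread producer = new Thread(() -> in.addContent("chunk1"));
        producer.start();
        System.out.println("read=" + in.blockForContent());
        producer.join();
    }
}
```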
This queued implementation (which could still do with some modernisation) works fine for SPDY, where different threads do the protocol parsing and the request handling. The calls from the protocol thread to content, eof etc. are sufficient to drive the state machine, and back pressure from slow readers is achieved via protocol mechanisms that monitor the size of the content queue.
However, for HTTP a different solution is required because of HTTP pipelining: all reading/parsing of IO bytes is suspended while the request is dispatched, until such time as a read is called. HTTP cannot read ahead, else there would be no TCP/IP back pressure exerted by a slow reader. Thus there is a separate HttpInputOverHTTP class that provides an alternative, non-queuing implementation.
With HttpInput, all calls to read(), isReady() and available() essentially call nextContent(). In the queuing implementation this is implemented to pop the next content off the queue, but in the HTTP implementation it actually does an IO read and parse, looking for a call to content(ByteBuffer). If there is no content, it calls _httpConnection.fillInterested(Callback) to wait for some IO activity before nextContent is called again. If it is a blocking read, then fill interest is registered like:
_httpConnection.fillInterested(_readBlocker);
_readBlocker.block();
otherwise, for async reads, it is called from unready() with the input itself as the callback, which has:
public void succeeded()
{
_httpConnection.getHttpChannel().getState().onReadPossible();
}
that triggers the async lifecycle.
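The whole non-queuing flow just described can be simulated in a compact sketch: nextContent() finds no content, registers fill interest with a callback, and when the simulated socket receives bytes the callback's success redispatches the read via an onReadPossible stand-in. Every name here is illustrative, not Jetty's actual API:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the non-queuing flow: no content -> register fill
// interest; bytes arrive -> callback succeeds -> async redispatch.
public class HttpInputFlowDemo {
    private final Queue<String> network = new ArrayDeque<>(); // simulated socket
    private Runnable fillInterestCallback;

    // Try to parse content; if none, register fill interest and return null.
    public String nextContent(Runnable callback) {
        String content = network.poll();
        if (content == null)
            fillInterestCallback = callback; // stands in for fillInterested(...)
        return content;
    }

    // Simulate IO activity: bytes arrive and the registered callback succeeds.
    public void bytesArrived(String data) {
        network.add(data);
        if (fillInterestCallback != null) {
            Runnable cb = fillInterestCallback;
            fillInterestCallback = null;
            cb.run(); // Callback.succeeded() -> onReadPossible()
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService dispatcher = Executors.newSingleThreadExecutor();
        CountDownLatch done = new CountDownLatch(1);
        HttpInputFlowDemo in = new HttpInputFlowDemo();

        // onReadPossible stand-in: redispatch the request to read again.
        Runnable onReadPossible = () -> dispatcher.submit(() -> {
            System.out.println("redispatched content=" + in.nextContent(null));
            done.countDown();
        });

        System.out.println("first read=" + in.nextContent(onReadPossible));
        in.bytesArrived("hello"); // IO activity completes the callback
        done.await(5, TimeUnit.SECONDS);
        dispatcher.shutdown();
    }
}
```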
So the final piece of this puzzle is the HttpConnection.fillInterested(Callback call). This was not in 9.0 but is derived from the AbstractConnection.block(BlockingCallback callback) method which was used to do blocking reads for HTTP.
AbstractConnection in Jetty-9 has a state machine that tracks dispatches to onFillable() together with calls for interest either from fillInterested() or block(BlockingCallback).
private enum State
{
IDLE, INTERESTED, FILLING, FILLING_INTERESTED, FILLING_BLOCKED, BLOCKED, FILLING_BLOCKED_INTERESTED, BLOCKED_INTERESTED
}
The complexity here is that interest can be registered from within or outside of a FILLING state (a call to onFillable), and if we are already calling onFillable() then we cannot immediately register that additional interest, but must queue it up until the call returns.
In 9.1, we have simplified this somewhat by having a lock-free, State-class-based state machine that implements the main onFillable states:
public static final State IDLE=new State("IDLE")
public static final State FILL_INTERESTED=new State("FILL_INTERESTED")
public static final State FILLING=new State("FILLING")
public static final State FILLING_FILL_INTERESTED=new State("FILLING_FILL_INTERESTED")
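The lock-free style can be sketched with an AtomicReference and a compare-and-set retry loop. The state names mirror the post, but the transition logic below is a simplified assumption, not Jetty's actual code:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch: lock-free state transitions via CAS retry loops.
public class CasStateDemo {
    static final String IDLE = "IDLE";
    static final String FILLING = "FILLING";
    static final String FILL_INTERESTED = "FILL_INTERESTED";
    static final String FILLING_FILL_INTERESTED = "FILLING_FILL_INTERESTED";

    private final AtomicReference<String> state = new AtomicReference<>(IDLE);

    // Register fill interest without locks: retry until the CAS wins.
    public void fillInterested() {
        while (true) {
            String s = state.get();
            String next = s.equals(FILLING) ? FILLING_FILL_INTERESTED : FILL_INTERESTED;
            if (state.compareAndSet(s, next))
                return;
        }
    }

    public String getState() { return state.get(); }

    public static void main(String[] args) {
        CasStateDemo c = new CasStateDemo();
        c.fillInterested(); // from IDLE
        System.out.println("state=" + c.getState());
    }
}
```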
Then, in addition to that state machine, a call to fillInterested(Callback) will establish a nested state machine, with the normal state machine wrapped in a FillingInterestedCallback state that holds the passed callback. The wrapped state machine can still progress (e.g. if the onFillable call returns it may move from FILLING to IDLE). While the state machine is wrapped, it has its own callback registered with connection.getEndPoint().fillInterested(callback); to process IO read interest. Only when that succeeds or fails will the nested state machine be unwrapped and normal processing continue.
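The wrap/unwrap idea can be sketched as below: a wrapper state decorates the current state while holding the pending callback, and only when the read completes is the inner state restored. All names and logic here are a simplified assumption of the technique, not Jetty's implementation:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch: a wrapper state holds a pending callback around the
// normal state machine, and is unwrapped when the read completes.
public class WrappedStateDemo {
    static class State {
        final String name;
        State(String name) { this.name = name; }
        public String toString() { return name; }
    }

    // Wrapper decorating the current state with a caller-supplied callback.
    static class FillingInterestedCallback extends State {
        final Runnable callback;
        final State inner;
        FillingInterestedCallback(State inner, Runnable callback) {
            super("FILL_INTERESTED(" + inner + ")");
            this.inner = inner;
            this.callback = callback;
        }
    }

    final AtomicReference<State> state = new AtomicReference<>(new State("FILLING"));

    // Wrap the current state, whatever it is, with the pending callback.
    void fillInterested(Runnable callback) {
        state.updateAndGet(s -> new FillingInterestedCallback(s, callback));
    }

    // IO is readable: run the callback and unwrap back to the inner state.
    void onFillable() {
        State s = state.get();
        if (s instanceof FillingInterestedCallback) {
            FillingInterestedCallback wrapper = (FillingInterestedCallback) s;
            wrapper.callback.run();
            state.set(wrapper.inner);
        }
    }

    public static void main(String[] args) {
        WrappedStateDemo demo = new WrappedStateDemo();
        demo.fillInterested(() -> System.out.println("read callback ran"));
        demo.onFillable();
        System.out.println("unwrapped=" + demo.state.get());
    }
}
```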
This approach is still a little complex and creates a little more garbage than desirable, but as it is only called when input is not available, it is not on the hot path for most request handling.
While making these changes work, there were a few other cleanups that went through the code, primarily in the HttpConnection.onFillable implementation, which still had a few too many special cases for closing. These have been removed and replaced by a policy of always sending a BadMessage event if a connection closes during a request's headers. This is a simpler approach, but it is currently producing a few unwarranted exceptions when a connection closes before ever sending a request (mostly a problem for test harnesses).
cheers