There's no such thing as an unbuffered send over
TCP/IP.
Your app buffers.
The Java layer buffers.
The OS network layer buffers.
The network itself buffers.
The various HTTP intermediaries buffer.
The network hardware between you and the remote buffers.
The remote side also has its buffers.
Any one of those can prevent the remote side app from seeing
the data you want in the time frame that you want.
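To make the first point concrete, here's a minimal sketch of what flush() actually does at the Java layer. The class name is mine, and a ByteArrayOutputStream stands in for the socket; the point is that flush() only pushes your data down to the next layer, it does not force it across the network.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for the socket's output stream (the layer below us).
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(sink, 8192);

        out.write("data: hello\n\n".getBytes());
        // Nothing has reached the layer below yet -- it sits in the
        // Java-side buffer until it fills or we flush.
        System.out.println("before flush: " + sink.size());

        out.flush();
        // Now it's been handed to the next layer down, which has
        // buffers of its own (OS, network, intermediaries, ...).
        System.out.println("after flush:  " + sink.size());
    }
}
```

Even after flush() returns, the OS socket buffer, intermediaries, and the remote side can all still hold the data before the remote app sees it.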
This buffering can be further exacerbated by traffic muxing,
traffic aggregation, compression, etc...
If the timeliness of the data is important, you'll be better
off using UDP, as that will reduce the number of points where
buffering can occur (but not eliminate it!)
But then you are on the hook for out-of-order packets,
resending dropped packets, etc. (pretty much what the TCP
layer is doing for you).
(This is how most network gaming, video conferencing,
live streaming, WebRTC, etc. works.)
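A minimal sketch of the UDP side, using the JDK's DatagramSocket (class name and payload are mine, and it sends to itself over loopback so it's self-contained). Each send() is a discrete datagram handed to the OS with no stream-level buffering, which is why it's the timelier option; it's also why you inherit the ordering/loss problems yourself.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);  // ephemeral port
             DatagramSocket sender = new DatagramSocket()) {

            // One send() == one datagram; no Nagle, no stream buffering.
            byte[] payload = "tick".getBytes();
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            // Blocks until the datagram arrives -- or forever if it was
            // dropped, which is the trade-off UDP hands back to you.
            byte[] buf = new byte[64];
            DatagramPacket received = new DatagramPacket(buf, buf.length);
            receiver.receive(received);
            System.out.println(new String(received.getData(), 0, received.getLength()));
        }
    }
}
```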
If you push enough data before your flush() to consume those
various buffers, then you can, in a roundabout way, force the data
through.
However, if any layer has congestion, then you are suddenly
blocking again.
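One knob that trims a single one of those layers is TCP_NODELAY, which disables Nagle's algorithm so small writes leave the OS send buffer immediately instead of being coalesced. A minimal sketch (class name is mine; it connects to itself over loopback so it's self-contained) -- note this only addresses OS-side coalescing, not intermediaries, congestion, or the remote side:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class NoDelayDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);            // ephemeral port
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {

            // Disable Nagle's algorithm: small writes are sent as soon as
            // possible instead of being coalesced in the OS send buffer.
            client.setTcpNoDelay(true);
            System.out.println("TCP_NODELAY: " + client.getTcpNoDelay());
        }
    }
}
```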
Timeliness and SSE are at odds with each other, mainly
because you are dealing with HTTP and all that it brings to
the table.
If timeliness is not a requirement, then stick with SSE and
all of the buffering that exists.
You should seriously consider CometD, as it will not send if
the specific endpoint is congested, queuing up those messages until
the congestion abates some.
You can even use the CometD features for message timeout (the
message expires and is considered old after x ms) and message ack
(confirmation that the remote endpoint got the specific message;
useful for unreliable clients on unreliable networks, e.g.
wifi/mobile).