Thanks a lot for this lengthy reply. I agree completely that the terminology is wrong. The API indeed supports QoS 2 as per the MQTT spec. Clearing up the grey area will certainly help developers and avoid confusion.
"Currently one has to implement his own persistence mechanism in
order to get true QoS 2 so looking forward to that one."
I only disagree in the terminology of "true QoS 2". I don't wish
it to be known that these APIs don't implement QoS 2 properly. Once
the message is accepted by the API, then you have "true QoS 2". It
is up to the API to decide whether or not to accept the message, and
unfortunately in the synchronous Java case, that is currently not
very clear. We have to fix that.
The reason that the IBM originated APIs do not currently allow the
publishing of messages when not connected is that we have many
people who think that the API must persist the messages to disk,
otherwise it is not implementing "true" QoS 1 or 2. A consequence
is that you may have up to 64k messages stored on disk per MQTT
client. And in these supposedly simple APIs, there is the concern
that we would have to add a complicated set of API calls to manage
that data. When we build messaging systems, they are intended to
move messages on as quickly as possible; they do not make
efficient databases.
However there are MQTT APIs originated from outside IBM that do not
have disk persistence, but do allow publishing while not connected.
From some people's point of view, these do not implement QoS 1 or 2
correctly.
I don't subscribe to the view that either of these models is
incorrect. They will behave differently under various circumstances
though, some of which I tried to capture in this blog post:
http://modelbasedtesting.co.uk/?p=39 (where persistence means to
"permanent" storage, like disk).
When we add "offline buffering" in Paho 1.2 to the Java and C
clients, we'll be thinking of what API calls are needed to manage
that stored data too.
Message Rejection
It depends on what action the server takes when rejecting the
message. Due to the current limitations of MQTT, there is no way to
"nack" a publish. There are only two options:
1) carry on with the message exchange
2) disconnect the client
If the server doesn't respond to the incoming message, then it will
be stacked up in the client. If the number of messages stacked up
hits the inflight limit, you won't be able to send any more
messages.
I disagreed with the policy of having any inflight message limit
(except for the MQTT limit of 64k) for the asynchronous client,
since you can implement one yourself in the application. This is
what the C client does. In any case, we will soon allow you to set
the limit for the Java client to whatever value you want (up to
64k).
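The application-level limit described above (the approach the C client takes) can be sketched with a counting semaphore. This is a stand-alone illustration under assumptions, not Paho code: the ExecutorService stands in for the network, and in a real client release() would be called from the publish completion callback.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of an application-level in-flight limit:
// a counting semaphore caps concurrent publishes, and each
// completion releases a permit.
public class InflightThrottle {
    private final Semaphore permits;

    public InflightThrottle(int maxInflight) {
        this.permits = new Semaphore(maxInflight);
    }

    // In real code the Runnable would be the MqttAsyncClient publish,
    // and release() would run in the delivery-complete callback.
    public void publish(ExecutorService network, Runnable send)
            throws InterruptedException {
        permits.acquire();               // blocks when the window is full
        network.submit(() -> {
            try {
                send.run();              // simulated network send
            } finally {
                permits.release();       // completion frees a slot
            }
        });
    }

    public int available() {
        return permits.availablePermits();
    }

    public static void main(String[] args) throws Exception {
        InflightThrottle throttle = new InflightThrottle(10);
        ExecutorService network = Executors.newFixedThreadPool(4);
        AtomicInteger delivered = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            throttle.publish(network, delivered::incrementAndGet);
        }
        network.shutdown();
        network.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("delivered=" + delivered.get());  // delivered=100
    }
}
```

The semaphore never lets more than maxInflight publishes be outstanding, so the client's own limit (and error 32202) is never hit.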
Ian
On 01/26/2015 01:32 PM, Davy De Waele wrote:
The buffer capability would be a very welcome
feature ... Currently one has to implement his own persistence
mechanism in order to get true QoS 2 so looking forward to that
one.
While we're on the topic of in-flight messages, during
testing I was in a situation where my broker was "throttling"
(read: rejecting) all messages to a particular topic:
/var/log/activemq/activemq.log.3:2015-01-26 12:14:40,202 [Q
NIO Worker 80] INFO Topic -
topic://testTopic, Usage Manager memory limit reached 1048576.
Producers will be throttled to the rate at which messages are
removed from this destination to prevent flooding it. See http://activemq.apache.org/producer-flow-control.html
for more info.
This resulted in Paho being unable to clear the
current 10 in-flight messages for that topic. As a result
Paho couldn't publish anything anymore (not even to
different topics).
Perhaps it would be interesting to maintain a max
number of in-flight messages on a per topic basis. That way
one topic that is "misbehaving" would not bring down the
whole system. But perhaps the buffer will solve that
problem.
On Mon, Jan 26, 2015 at 2:16 PM, Dave Locke <locke@xxxxxxxxxx> wrote:
and one other enhancement that is on the
wish list is to add a buffer capability that can accept
and store messages
when connectivity is not available but would also handle
the case where
the "in-flight" window is maxed out.
1) You can set a callback on the publish method, which
will be called when
each publish has completed. This way you can count
how many messages
are in flight and delay the call to publish.
2) We also have an enhancement currently under review
which allows the
application to set the maximum number of inflight
messages allowed.
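A minimal sketch of option 1, counting in-flight messages and delaying the call to publish when the window is full. The Paho listener types are replaced here by a plain Runnable so the idea runs stand-alone; onComplete() stands in for the success/failure callback of an IMqttActionListener.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of counting in-flight publishes via the completion callback,
// blocking further publishes once the window is full.
public class CallbackCounter {
    private final int maxInflight;
    private int inflight = 0;

    public CallbackCounter(int maxInflight) {
        this.maxInflight = maxInflight;
    }

    // Stands in for the real async publish call; `send` simulates
    // handing the message to the network.
    public synchronized void publish(ExecutorService network, Runnable send)
            throws InterruptedException {
        while (inflight >= maxInflight) {
            wait();                      // delay the publish until a slot frees
        }
        inflight++;
        network.submit(() -> {
            send.run();                  // simulated delivery
            onComplete();                // stands in for onSuccess/onFailure
        });
    }

    private synchronized void onComplete() {
        inflight--;
        notifyAll();                     // wake a waiting publisher
    }

    public synchronized int inflight() {
        return inflight;
    }

    public static void main(String[] args) throws Exception {
        CallbackCounter counter = new CallbackCounter(10);
        ExecutorService network = Executors.newFixedThreadPool(4);
        AtomicInteger sent = new AtomicInteger();
        for (int i = 0; i < 50; i++) {
            counter.publish(network, sent::incrementAndGet);
        }
        network.shutdown();
        network.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(sent.get() + " sent, " + counter.inflight() + " in flight");
    }
}
```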
I hadn't got to reviewing it yet -- I hope to do so
this week or next.
Ian
On 01/26/2015 11:07 AM, Davy De Waele wrote:
Hi,
I started implementing a solution based
on the java MqttAsyncClient
since I needed to obtain the IMqttToken for message
bookkeeping due to
some limitations with the syncClient.
However I noticed the following :
When sending a substantial number of msgs in a short timespan,
the risk of running into the max inflight messages limit is very
high (writing a loop that sends 20 msgs to a topic results in
"Too many publishes in progress (32202)" about half of the time).
This is not the case with the syncClient
because you allow
for some time for the message to get sent before
moving on to the next
one.
How should we handle this? Our broker is
perfectly capable
of handling this type of load, but here it seems that
the client is the
bottleneck.
Should the asyncClient always be used by
specifying a
waitForCompletion on the mqttDeliveryToken to avoid
having too
many inflight messages in the client?
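The pattern asked about here, blocking on each delivery token before the next publish, can be sketched stand-alone. FakeToken is a hypothetical stand-in for Paho's IMqttDeliveryToken; waiting per message keeps the in-flight count at one, which avoids error 32202 at the cost of one round trip per publish.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical stand-in for IMqttDeliveryToken: waitForCompletion()
// blocks until the (simulated) broker acknowledgement arrives.
class FakeToken {
    private final CountDownLatch acked = new CountDownLatch(1);
    void complete() { acked.countDown(); }               // simulated ack
    void waitForCompletion() throws InterruptedException { acked.await(); }
}

public class SerialPublish {
    // Publish `messages` times, blocking on each token; returns the
    // peak number of messages in flight at any point.
    static int run(int messages) throws InterruptedException {
        ExecutorService network = Executors.newSingleThreadExecutor();
        int peak = 0, inflight = 0;
        for (int i = 0; i < messages; i++) {
            FakeToken token = new FakeToken();  // as returned by publish
            inflight++;
            peak = Math.max(peak, inflight);
            network.submit(token::complete);    // simulated PUBACK/PUBCOMP
            token.waitForCompletion();          // block: serializes sends
            inflight--;
        }
        network.shutdown();
        return peak;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("peak in-flight: " + run(20));  // peak in-flight: 1
    }
}
```

This mirrors what the sync client effectively does, and is why the sync client never trips the in-flight limit but also cannot exploit the broker's capacity for parallel delivery.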
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and
Wales with number
741598.
Registered office: PO Box 41, North Harbour, Portsmouth,
Hampshire PO6
3AU