Thanks, I’ll check out the development branch and keep an eye on that ticket’s progress. I can definitely see why the client library wasn’t designed to handle that kind of scenario. The API implies that it does, but it’s a hard engineering problem, and supporting it would make the client a lot more complicated.
I will look into the client/broker disconnection hooks. If those are reliable enough under the scenarios I’m testing, I can write my own logic for storing and re-sending messages outside of the client.
Thanks again for the insight into the original design goals of the library, that helps clear up a lot of questions I had.
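The "store and re-send outside the client" idea above can be sketched roughly as follows. This is a minimal, hypothetical outline: all class and method names here are made up, and in a real integration the `onConnected()`/`onConnectionLost()` signals would be wired to the client’s connection hooks (e.g. the `connectionLost()` callback and the reconnect path), with `doPublish()` delegating to the client’s actual publish call.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: buffer outbound messages while the connection is down and
// drain the buffer once the connection hook reports we are back online.
public class OfflineBuffer {
    private final Queue<byte[]> pending = new ArrayDeque<>();
    private boolean connected = false;

    // Wire this to the client's "connected" notification.
    public synchronized void onConnected() {
        connected = true;
        // Drain everything that queued up while we were offline.
        while (!pending.isEmpty()) {
            doPublish(pending.poll());
        }
    }

    // Wire this to the client's connection-lost callback.
    public synchronized void onConnectionLost() {
        connected = false;
    }

    // Application-facing send: publish immediately if connected,
    // otherwise hold the message until the next reconnect.
    public synchronized void send(byte[] payload) {
        if (connected) {
            doPublish(payload);
        } else {
            pending.add(payload);
        }
    }

    public synchronized int pendingCount() {
        return pending.size();
    }

    // Stand-in for the real client publish call.
    protected void doPublish(byte[] payload) { }
}
```

A real version would also want to bound the queue and persist it to disk (otherwise it has the same "it’s not a database" problem Ian describes below), but the shape of the logic is the same.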
as Manuel says, in the development branch there is a method that allows
you to configure the maximum number of messages in flight.
This might seem strange, but this wasn't designed to allow
applications to publish messages when not connected to a broker
(you can't, at the moment). It was only intended to store the state
of inflight QoS 1 and 2 messages so that the MQTT protocol exchanges
could be completed when the application reconnects.
Some reasons why publishing while not connected was thought a bad
idea at the time:
- a lot of messages could be stored in the queue, when this is
not the best place to store them (it's not a database)
- we'd have to add methods to administer the queued messages
(query, delete, etc)
- if the application never successfully connected, queued
messages would never be sent
However, many people have come to expect the function, regardless of
the drawbacks, so it is now high on our to-do list, in the C clients
as well as Java. I have opened a bug,
but it covers all clients. I should open one for each so it's
more obvious.
Ian
On 09/23/2015 09:09 AM, Manuel Domínguez Dorado wrote:
Owen,
you can change this value by calling setMaxInflight() on
MqttConnectOptions (getMaxInflight() returns the current value). Also, you can
track the number of messages currently in flight via a counter that is
incremented when publish() is called and decremented when the deliveryComplete()
callback is executed. Using this counter you can avoid the REASON_CODE_MAX_INFLIGHT exception.
Hi, I was doing some testing with the Paho Java client library, and when I sent a lot of messages (especially at QoS 1, which requires an ack) I started getting exceptions thrown for REASON_CODE_MAX_INFLIGHT. It looks like the default is hardcoded to be 10 here:
My shallow understanding of how the client works is that a persistent message is stored (in memory or on disk), but the client also seems to check this limit first. Doesn’t that mean the maximum number of stored/buffered messages is 10, no matter what persistence backing store you are using?
I could hack the client library code to raise that limit, but the default still seems very low to me. The use case is a device where the network might be down for minutes or hours at a time; the client persistence feature is great and all, but ten messages is really not much of a buffer. I feel like that’s actually a bug.
I could patch the code, but maybe the limit should be configurable? Or removed entirely if you’re using file-based persistence?