My publisher and broker run on different machines. I am using QoS=2 for message delivery, with the Python paho MQTT client. This is a further extension of
MQTT - Is there a way to check if the client is still connected
1) When I publish a message to the connected broker, it acknowledges me by calling the on_publish() callback. But when I disconnect the broker (running on a different machine) from the network, the publisher stores the published messages on the local machine, and when I reconnect the broker to the network it publishes all the previous messages to it. I think these messages are stored as in-flight (unconfirmed) messages. If so, where are they stored, and is there any limit on the number of in-flight messages? I have not configured anything regarding in-flight messages in my code.
2) In the first case I disconnected my broker from the network; now I shut down the broker and reconnect it to the network. My program calls the on_disconnect() callback reporting an unexpected disconnection, and then publishes all the unpublished messages to the broker.
a) I am confused why the on_disconnect() callback is called only when I shut down the broker, not when I disconnect the broker from my local network.
b) The on_disconnect() callback is called only when my broker reconnects to the network.
Is there any way for the publisher to be informed immediately when the broker disconnects suddenly?
3) I am using MQTT for real-time GPS tracking. I want to store messages in a local DB when the publisher is not connected to the broker, but I can't find any way for the publisher to know immediately that it has been disconnected from the broker.
4) Is QoS=2 the best way to ensure delivery of messages to the broker, or is it better to store messages in a local DB while disconnected from the broker and then automatically publish everything from the local DB once reconnected?
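For reference, this is roughly how my publisher is set up (the broker address, client id and topic here are placeholders, not my real values):

import paho.mqtt.client as mqtt

def on_publish(client, userdata, mid):
    print("published, mid =", mid)

def on_disconnect(client, userdata, rc):
    if rc != 0:
        print("unexpected disconnection, rc =", rc)

client = mqtt.Client(client_id="gps-publisher")
client.on_publish = on_publish
client.on_disconnect = on_disconnect

client.connect("192.168.1.10", 1883, keepalive=10)
client.loop_start()

client.publish("vehicles/42/position", '{"lat": 0.0, "lon": 0.0}', qos=2)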
Here is the architecture topology:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API; more specifically, it requires a webserver endpoint to which it can push the data every minute or so. This is a ready-made product, so I cannot change the data transfer method.
A webserver on my side that receives and stores the data.
As I am new to WebSockets, I interpret the above configuration as a WebSocket server installed on my webserver that waits for data from the IoT server (the client).
So I deployed a Linux server on DigitalOcean and started a WebSocket server to wait for incoming connections. The code I used for the server is:
import asyncio
import websockets

async def echo(websocket, path):
    async for message in websocket:
        print(message)

start_server = websockets.serve(echo, "MYSERVERIP", 80)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
All I need at this stage is to print all JSON packets that are pushed from the IoT server.
When I try to set the endpoint address in the IoT server, it refuses to accept ws://Myserver:80 and only accepts http://Myserver:80. Obviously I don't have any HTTP server running on my server, and therefore I am guessing my server refuses the connection.
Also, the IoT API requires token (X-Auth-Token) authentication. I am using the websockets Python library, but I didn't set up authentication on my server; I left it null on both the IoT server API and my server.
If I were to add token authentication, what parameters or arguments would the WebSocket server require? I tried to search the websockets docs, but with no luck.
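For context, my best guess so far is the process_request hook that websockets.serve() accepts; the sketch below is untested, and the header name and token value are only placeholders:

import asyncio
import http
import websockets

EXPECTED_TOKEN = "my-secret-token"  # placeholder value

async def check_auth(path, request_headers):
    # Return None to accept the handshake, or an HTTP response tuple to reject it.
    if request_headers.get("X-Auth-Token") != EXPECTED_TOKEN:
        return http.HTTPStatus.UNAUTHORIZED, [], b"invalid or missing X-Auth-Token\n"
    return None

async def echo(websocket, path):
    async for message in websocket:
        print(message)

start_server = websockets.serve(echo, "MYSERVERIP", 80, process_request=check_auth)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()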
This is not for a production environment! I am only trying to learn.
Any thoughts are welcome.
So these are the requirements:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API and, more specifically, it requires a webserver endpoint where it can push the data every minute or so.
A webserver on my side that receives and stores the data.
They need the data refreshed every minute or so. In my humble opinion, WebSockets are necessary only for real-time use cases.
That said, my proposed solution is to use a message broker instead. I think it's easier to handle than WebSockets directly, and you do not have to care about maintaining a live socket connection all the time (which is not efficient in terms of energy in the IoT world).
In other words, use a pub/sub architecture instead. Your IoT devices publish data to the message broker (a common one is RabbitMQ), and then you build a server that subscribes to the broker, consumes its data and stores it.
Now every device connects to the cloud only when it has data available, which saves energy. The protocol may be MQTT or HTTP; MQTT is often used in the IoT world.
Related: Pub-sub messaging benefits
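As an illustration of that architecture, here is a minimal sketch with the Python paho MQTT client (the broker address and topic are invented; with RabbitMQ you would use a different client library such as pika):

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"    # placeholder broker address
TOPIC = "people-counter/site-1"  # placeholder topic

# Device side: connect only when there is data, publish it, then disconnect.
def publish_count(count):
    device = mqtt.Client()
    device.connect(BROKER, 1883)
    device.publish(TOPIC, str(count), qos=1)
    device.disconnect()

# Server side: stay subscribed and store whatever arrives.
def on_message(client, userdata, msg):
    print("store in DB:", msg.topic, msg.payload.decode())

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC, qos=1)
subscriber.loop_forever()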
I am using the Python-based paho MQTT client to publish data to a Mosquitto MQTT broker.
Let's assume a scenario where the client wants to publish a message and the broker has disconnected.
The Python client object buffers that message in _out_message (an ordered dictionary) and keeps retrying to send it.
I wanted to know:
For how long will the MQTT client buffer such messages?
Is there any time limit or retry limit after which the client will drop the message?
I want to dump/log such messages.
According to the Eclipse Paho Python documentation, you can set the maximum number of outgoing messages with Quality of Service greater than 0 (QoS > 0) that can be pending in the outgoing message queue, using the method:
max_queued_messages_set(self, queue_size)
It seems that with the default value (0) all messages are kept until the MQTT client is able to send them. So, in the end, I suppose that messages are kept until the Python process reaches the memory limit imposed by the operating system.
You can force the MQTT client to discard messages using the method reinitialise.
reinitialise(client_id="", clean_session=True, userdata=None)
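As a sketch of how those calls fit together (the broker address and topic are placeholders, and the queue size of 100 is an arbitrary example):

import paho.mqtt.client as mqtt

client = mqtt.Client()
# Keep at most 100 unsent QoS>0 messages in the outgoing queue;
# with the default of 0 the queue is unbounded.
client.max_queued_messages_set(100)

client.connect("broker.example.com", 1883)
client.loop_start()

info = client.publish("sensors/gps", "payload", qos=2)
if info.rc == mqtt.MQTT_ERR_QUEUE_SIZE:
    # The queue was full and the message was dropped - dump/log it here.
    print("queue full, message dropped")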
How can the CLIENT continuously receive data from the SERVER? I think my sequence diagram is an overly complex solution. I just want the client to connect to the server once and then have the server continuously send data to the client. Here I use a RESTful API + Mosquitto.
MQTT can run over a WebSocket connection, so it is possible to subscribe directly to the MQTT broker from within a webpage. This would remove the need for any REST calls.
The Paho JavaScript client supports MQTT over WebSockets.
The broker will need to be configured to support MQTT over WebSockets on a separate port from the normal native MQTT listener. Details of how to set up Mosquitto to do this can be found in its man page.
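For example, a mosquitto.conf along these lines adds a WebSockets listener next to the native MQTT one (port 9001 is just a common choice; Mosquitto 2.x may additionally require authentication settings once explicit listeners are defined):

# native MQTT
listener 1883

# MQTT over WebSockets
listener 9001
protocol websockets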
I am currently developing something like a "smart home", and I have a few different devices in my home, all connected to openHAB via MQTT. I'm using the Paho MQTT library (Python) for my purposes.
Generally, MQTT has a "keepalive" property. This property describes how long my client will stay connected to the MQTT server (AFAIK it sends a ping to the server) when there are no updates on the subscribed topic.
But here I have a huge problem. The topic I need could be updated once per hour, or even once every few days or months. Let's say it is an indoor alarm.
How can I avoid that keepalive timeout, or ignore that field? Could it be unlimited?
You have misunderstood what the keepalive value represents.
MQTT clients can stay connected indefinitely even if they do not publish or receive any messages. But the broker needs to keep track of which clients are still connected so it knows when to send the Last Will and Testament (LWT) message for the client. In order to do this it uses the keepalive time.
Every time a message is sent or received by the client, the broker resets a timer; if this timer exceeds 1.5 times the keepalive value, the broker marks the client as disconnected and processes the LWT. To prevent clients with very low message rates from being disconnected, such a client can send a PINGREQ packet at any time (most likely on timeout of the keepalive value) to the server/broker. The server receives the PINGREQ, answers with a PINGRESP packet, resets the keepalive timer to zero and leaves the client in the connected state.
See the Keep Alive section of the MQTT standard: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc385349238
The Client can send PINGREQ at any time, irrespective of the Keep Alive value, and use the PINGRESP to determine that the network and the Server are working. If the Keep Alive value is non-zero and the Server does not receive a Control Packet from the Client within one and a half times the Keep Alive time period, it MUST disconnect the Network Connection to the Client as if the network had failed
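As an illustration with the Python paho client (the broker address, client id and topic are placeholders): the network loop handles the PINGREQ/PINGRESP exchange for you, and will_set() registers the LWT that the broker publishes if the keepalive window expires:

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="indoor-alarm")

# Must be called before connect(): the broker publishes this message for us
# if it hears nothing (messages or pings) for 1.5 * keepalive seconds.
client.will_set("home/alarm/status", "offline", qos=1, retain=True)

client.connect("192.168.1.2", 1883, keepalive=60)
client.loop_start()   # sends PINGREQs automatically while the topic stays quiet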
When sending the initial MQTT CONNECT message from a client, you can supply an optional "keep-alive" value. This value is a time interval, measured in seconds, during which the broker expects the client to send a message, such as a PUBLISH message. If no message is sent from the client to the broker during the interval, the broker automatically closes the connection. Note that the keep-alive value you specify is multiplied by 1.5, so setting a 10-minute keep-alive actually results in a 15-minute interval.
Have a look at the Keep Alive section of the MQTT specification:
A Keep Alive value of 0 has the effect of turning off the Keep Alive mechanism. If Keep Alive is 0 the Client is not obliged to send MQTT Control Packets on any particular schedule. v5 spec source
Therefore, set the keep alive to 0, and the client doesn't have to send keepalive pings on any particular schedule. The server should treat such a connection (e.g. from a client that connected last year) as still established, but this is not guaranteed (the client might be disconnected when the server is shut down, for example).
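In the Python paho client that would look roughly like the sketch below (the broker address is a placeholder; whether a zero keepalive behaves this way can depend on the client and broker versions):

import paho.mqtt.client as mqtt

client = mqtt.Client()
# keepalive is the third argument of connect(); 0 turns the mechanism off,
# so the client is not obliged to ping, but the broker also loses its way
# of detecting a dead connection and firing the LWT promptly.
client.connect("192.168.1.2", 1883, keepalive=0)
client.loop_forever()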
I am studying the performance of the MQTT protocol. I am using a Raspberry Pi as the MQTT broker and a PC as a client, both connected to the same LAN. The PC sends a message to the broker, and when the broker receives it, it publishes back with publish.single. When I try to send 10,000 publish messages per minute with QoS=2, I get the following error at the client side after ~8163 messages:
error: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
I tried the same with QoS=0 and QoS=1, and it worked without this error. What's the problem?
This is most likely because you have exhausted the number of available local ports on the client machine, since you have so many messages in flight.
QoS 2 messages have a lot more overhead (they require confirmation in both directions).
It is possibly being made worse by using the publish.single method, because this creates and tears down a full connection to the broker for each message; if you create a persistent connection and reuse it, things will probably flow better.
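A rough sketch of the persistent-connection approach with the Python paho client (the broker address and topic are invented); one connection carries all the QoS 2 handshakes instead of a new socket per message:

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.20", 1883)   # placeholder address of the Raspberry Pi broker
client.loop_start()                    # one TCP connection reused for every message

for i in range(10000):
    client.publish("perf/test", str(i), qos=2)

client.loop_stop()
client.disconnect()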