MQTT broker and client on the same RPI - python

So I'm building a system where I scan an RFID tag with a reader connected to a Raspberry Pi. The tag ID should then be sent to another "central" RPi, where a database is checked for some info; if it matches, the central Pi sends a message to a lamp (also connected to a Pi), which will then turn on. This is just the start of a larger home-automation system.
I read that MQTT makes it very easy to have multiple RPis communicate and act on events like this. The only thing I am wondering about, but can't find documented on the internet, is whether the central Pi in my case can act as the broker but also be subscribed to the topic for the RFID tag ID, check the database, and then publish to another topic for the light.
Purely based on logical thinking I'd say yes, since the broker runs in the background. I should thus still be able to run a Python script that subscribes/publishes to, I'm guessing, localhost instead of the central Pi's IP address and port.
Can anyone confirm this? I can't test it myself yet because I have only just ordered the equipment and am doing lots of preparatory research.

You can run as many clients as you like on the same machine as the broker (you could even run multiple brokers, as long as they listen on different ports). The only thing you need to do is ensure that each client has a different client ID.
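A minimal sketch of what the central Pi's client could look like, assuming a Mosquitto broker already running locally on the default port 1883 and the paho-mqtt 1.x API; the topic names and the database-check helper are made up for illustration:

import paho.mqtt.client as mqtt

def tag_is_authorized(tag_id):
    # Placeholder for the real database lookup.
    return tag_id in {"04A31B2C"}

def on_connect(client, userdata, flags, rc):
    client.subscribe("rfid/tag")  # topic the reader Pi publishes to

def on_message(client, userdata, msg):
    tag_id = msg.payload.decode()
    if tag_is_authorized(tag_id):
        client.publish("lamp/command", "on")  # topic the lamp Pi listens on

client = mqtt.Client(client_id="central-pi")  # each client needs its own ID
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # the broker runs on this same machine
client.loop_forever()

The reader and lamp Pis would connect the same way, only using the central Pi's IP address instead of localhost.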

Related

Trying to get iOS notifications using Bluetooth (RPI)

I converted my RPI into a hub for my room (it rests on the wall and shows the current time, the weather, what music is playing via shairport-sync, and a spot for notifications).
From my personal experience, I've noticed that Bluetooth devices like smart watches and cars can read notifications. So I had the idea to connect my RPI to my iPhone and read the notifications coming from my phone. However, I've made very little headway in implementing this.
So far, I have set up a separate machine that offloads all of the Bluetooth work so the RPI has room to breathe. Other than that, though, I cannot figure out how to connect to iOS, subscribe to notifications, and retrieve them. I have looked into this topic extensively and have found many articles talking about ANCS (the Apple Notification Center Service), subscribing to certain Apple services, using a MAP profile, and a whole lot about GATT servers, but these articles lack practical steps or any pointers on where to get started.
I've been using various Python libraries (pybluez, bleak, etc.) in conjunction with BlueZ to try to connect and retrieve iOS notifications over Bluetooth. Many of the articles and posts about this topic involve a BLE device communicating with an app on iOS, which isn't what I need.
TL;DR: I need help / a pointer in the right direction for pairing and subscribing to iOS notifications over Bluetooth using Python.
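For orientation, a heavily hedged sketch of the subscription step: ANCS is a GATT service the iPhone exposes, and in principle bleak can subscribe to its Notification Source characteristic once the Pi and the phone are paired and bonded (e.g. via bluetoothctl); the address below is a placeholder, and the UUID comes from Apple's ANCS specification:

import asyncio
from bleak import BleakClient

IPHONE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder: your phone's address
# Notification Source characteristic of the ANCS service.
ANCS_NOTIFICATION_SOURCE = "9FBF120D-6301-42D9-8C58-25E699A21DBD"

def on_ancs_event(sender, data):
    # Per the ANCS spec the payload starts with EventID (0 = added,
    # 1 = modified, 2 = removed), EventFlags, CategoryID, CategoryCount,
    # then a 4-byte NotificationUID.
    event_id, category = data[0], data[2]
    print(f"event={event_id} category={category}")

async def main():
    async with BleakClient(IPHONE_ADDRESS) as client:
        await client.start_notify(ANCS_NOTIFICATION_SOURCE, on_ancs_event)
        await asyncio.sleep(60)  # listen for a minute

asyncio.run(main())

Fetching the notification title and text additionally requires writing to the ANCS Control Point and parsing the Data Source characteristic, which is considerably more involved; this sketch only reports that something happened.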

How can I use MQTT long term in IoT Core?

So first of all, what I really want to achieve: I want to know when an IoT device has stopped working (i.e. lost connection, shut down, basically it's no longer talking to IoT Core). I can't seem to find an implementation for this on GCP.
I have a Raspberry Pi as my IoT device, configured on IoT Core, and somewhere I read that, since this is not implemented, a way to solve it is to create a logging sink that activates a Cloud Function whenever there is a CONNECT/DISCONNECT log entry. This would serve my purpose, and I have implemented this sink and Cloud Function to alert me.
I have been following this guide on connecting to MQTT. However, the way they explain it, they set it up so that whenever the expiration time on the JWT is exceeded, they disconnect the client and create a new one to renew the JWT. That means I will be alerted of a connection/disconnection every time the client is renewed, so I won't be able to differentiate a real issue from routine renewals of the MQTT client.
In the same guide, they mention MQTT long-term support (LTS), and they claim that this way you can set up the client once and communicate continuously through it for the supported time, which it says is until 2030. This seems to be what I really want, but I have not been able to connect this way, and they don't explain it other than saying the hostname should be mqtt.2030.ltsapis.goog and to use primary and backup certificates, which are different from the complete root CA from the first method.
I tried using basically the same process for setting up the client:

import ssl
import paho.mqtt.client as mqtt

# client_id, project_id, private_key_file, algorithm and ca_certs are
# defined earlier, as in the guide.
client = mqtt.Client(client_id=client_id)

# With Google Cloud IoT Core, the username field is ignored, and the
# password field is used to transmit a JWT to authorize the device.
client.username_pw_set(
    username='unused',
    password=create_jwt(project_id, private_key_file, algorithm))

# Enable SSL/TLS support.
client.tls_set(ca_certs=ca_certs, tls_version=ssl.PROTOCOL_TLSv1_2)
but changing the hostname and giving it the primary cert where I would otherwise give it the complete ca_certs. It won't accept that, and I am not sure how else to do it with primary and backup certificates. I am looking at the documentation on tls_set, but I don't see where these would go or how they differ from the complete CA certs. I haven't seen any other examples outside of this guide.
I am hoping to be able to connect to this MQTT LTS so that I can maintain the connection without having to constantly renew the client.
The long-term MQTT domain lets you use the LTS configuration for a long period of time; it does not extend the lifetime of a single connection.
As you mention, for your use case the solution would be to activate and use device logs. One of the events is triggered when a device disconnects from IoT Core, and you can use that event to trigger an alert.
Keep in mind that the time limits for the connection are set for security purposes, and the client should renew the connection.
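A minimal sketch of that renewal loop, reusing the (undefined) names from the snippet in the question; the token lifetime and the 60-second reconnect margin are assumptions:

import ssl
import time
import paho.mqtt.client as mqtt

JWT_LIFETIME_SECS = 20 * 60  # must match the 'exp' claim set by create_jwt

while True:
    # client_id, project_id, private_key_file, algorithm, ca_certs and
    # create_jwt are the same as in the guide's snippet above.
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(
        username='unused',
        password=create_jwt(project_id, private_key_file, algorithm))
    client.tls_set(ca_certs=ca_certs, tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect('mqtt.googleapis.com', 8883)
    client.loop_start()
    time.sleep(JWT_LIFETIME_SECS - 60)  # reconnect shortly before expiry
    client.loop_stop()
    client.disconnect()

With a known renewal period like this, the Cloud Function watching the logs could ignore a DISCONNECT that is followed by a CONNECT within the margin and alert only on the ones that are not.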

Raspberry Pi Bridge: I need to send a "fake" internet response to an Internet appliance

I have a Raspberry Pi already configured as a bridge and working just fine. The principal use is to monitor and capture traffic from a small internet appliance. That appliance receives radio transmissions from a set of local sensors and then posts the values to an internet server. As a bridge, I'm "reading" the transmissions with a Python program as they pass through and processing some of the sensor data locally for my own use.
The problem is that the appliance seems to need the internet to function correctly, i.e. it needs to think the internet is up and its messages are being received by the server. The other day we had a 10-hour internet outage and I lost all of that data.
The Internet appliance does not seem to be waiting for an "I got it" response from the server for every transmission, but rather seems to need an occasional "I got it" or something from the server.
Thus, when the internet is down for an extended period, the appliance stops processing the local sensor signals, even though the sensors are still transmitting, and stops trying to post the information to the internet server. And I stop getting data to process locally.
My idea is to try to construct a "fake message from the server" and just periodically send it to that Internet appliance, regardless of the state of the Internet. While I can see all of the traffic to and from the Internet appliance, I have no idea how to construct such a record nor how to send it to the appliance via Python.
Any thoughts? Thanks...RDK
Communication with the appliance can be done with the socket module; see https://docs.python.org/2/howto/sockets.html
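As a sketch of the "fake server" idea, assuming you can steer the appliance's outbound traffic to the Pi (e.g. with DNS or an iptables redirect on the bridge) and that the real server replies with plain HTTP; in practice you would replay the exact response bytes captured from your bridge traffic:

import socket

# A canned acknowledgement; replace with the real server's captured reply.
FAKE_RESPONSE = (b"HTTP/1.1 200 OK\r\n"
                 b"Content-Length: 2\r\n"
                 b"Connection: close\r\n\r\n"
                 b"OK")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 80))  # the port the appliance expects; 80 needs root
server.listen(5)

while True:
    conn, addr = server.accept()
    request = conn.recv(4096)    # the appliance's posting; log or parse it
    print(f"from {addr}: {request[:80]!r}")
    conn.sendall(FAKE_RESPONSE)  # the "I got it" the appliance waits for
    conn.close()

One caveat: if the appliance talks to its server over TLS, this won't work without the server's private key, so check the captured traffic first.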

Routing messages through a server in python

I want to implement the following but I'm not sure where to start / what to Google.
I'd appreciate some direction since I've never written any program that requires network connectivity and am pretty lost:
I've got 3 Raspberry Pis sitting around. I want 2 of them to be able to chat while the 3rd routes the messages (acts as a server between them).
The general flow of events should be something like this:
Server starts running on Pi #1
Pi #2 starts running and connects to the server (whose IP will be static, I guess) with a name it chooses. Pi #3 does the same as #2.
Pi #3 can then, knowing the name of Pi #2, send a message to Pi #2 by addressing it with that name.
This is the general outline of what I want to achieve.
I'm not sure what the server that runs on Pi #1 should be (I've heard of webserver frameworks like Flask but I don't have enough knowledge to determine if they fit my needs).
I'm also not sure on what I should be using for the client side (Pi #2,3). I could probably use sockets but I assume there is a better / easier way.
If you are on a private network, XML-RPC might be a good choice, because
It's built into Python, see this example
You can call remote functions almost as if they were local
Drawbacks:
Little network security
When sending raw data, it needs to be encoded (since XML-RPC is a text protocol)
To check if your remote server is running, you can use sockets as in this example.
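A minimal sketch of the routing server on Pi #1 using the stdlib xmlrpc modules, with a simple send-and-poll mailbox scheme (the function names and the mailbox design are invented for illustration):

from collections import defaultdict
from xmlrpc.server import SimpleXMLRPCServer

inboxes = defaultdict(list)  # name -> list of [sender, text] not yet fetched

def send(to_name, from_name, text):
    # Queue a message for the named recipient.
    inboxes[to_name].append([from_name, text])
    return True

def receive(name):
    # Hand over and clear everything queued under this name.
    messages, inboxes[name] = inboxes[name], []
    return messages

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(send)
server.register_function(receive)
server.serve_forever()

Pi #2 and Pi #3 would each create an xmlrpc.client.ServerProxy("http://<pi1-ip>:8000"), call send("pi2", "pi3", "hello") to talk, and poll receive("pi2") in a loop to listen.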

Clustering TCP servers, so can send data to all clients

Important note:
I've asked this question already on ServerFault: https://serverfault.com/questions/349065/clustering-tcp-servers-so-can-send-data-to-all-clients, but I'd also like a programmer's perspective on the problem.
I'm developing a real-time mobile app by setting up a TCP connection between the app and server backend. Each user can send messages to all other users.
(I'm building the TCP server in Python with Twisted, creating my own 'protocol' for the communication between the app and the backend, and hosting it on Amazon Web Services.)
Currently I'm trying to make the backend scalable (and reliable). As far as I can tell, the system could cope with more users by upgrading to a bigger server (which could become rather limiting), or by adding new servers in a cluster configuration - i.e. having several servers sitting behind a load balancer, probably with 1 database they all access.
I have sketched out the rough architecture of this (diagram omitted here).
However, what if the Red user sends a message to all other connected users? Red's server has a TCP connection with Red, but not with Green.
I can think of one way to deal with this problem:
Each server could have an open TCP (or SSL) connection with every other server. When one server wants to send a message to all users, it passes it along its connections to the other servers. A record could be kept in the database of which servers are online (and their IP addresses), and one of the servers could be a boss, i.e. it decides whether the others are up and running and, if not, removes them from the database. (If a server was up but lost its connection to the boss, it could check the database to see whether it had been removed, and restart if it had; otherwise it could assume the boss was down.)
Clearly this needs refinement but shows the general principle.
Alternatively, and I'm not sure if this is possible (it definitely seems like wishful thinking on my part):
Perhaps users could just connect to a box or router, and all servers could message all users through it?
If you know how to cluster TCP servers effectively, or a design pattern that provides a solution, or have any comments at all, then I would be very grateful. Thank you :-)
You need to decide (or, if you already did, share those decisions with us) on the reliability requirements for your system: must all messages be delivered to all users in every case (e.g. when one or more servers crash)? Can you tolerate sending the same message twice to the same user after a server crash? Your system's complexity depends directly on these decisions.
The simplest version is one where a message may not reach every user if a server crashes. All your servers keep TCP connections to each other. One of them receives a message from a user, sends it to all of its own connected users, and forwards it to all the other servers; those servers then send it on to their users. To scale the system, you just start an additional server that connects to all the existing servers.
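A sketch of just that relay rule, with the transport left abstract (the class and its connection sets are illustrative, not tied to Twisted or any other framework):

class FanOutServer:
    def __init__(self):
        self.user_conns = set()    # connections to this server's own users
        self.server_conns = set()  # connections to the other servers

    def on_message(self, source, payload):
        # Deliver to every locally connected user except the sender.
        for conn in self.user_conns - {source}:
            conn.send(payload)
        # Relay to the peer servers only when the message came from a
        # local user; messages arriving from a peer are delivered but not
        # re-relayed, which keeps a fully meshed cluster from looping.
        if source in self.user_conns:
            for peer in self.server_conns:
                peer.send(payload)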
Have a look at how this is handled by IRC servers. They can essentially do this already: everybody can send to everybody else, on all servers, or just to single users (also on another server), or to groups, called "channels". It works by routing messages among the servers.
It's not that hard, as long as the servers know each other and can talk to each other.
On a side note: on 9/11, the most reliable internet news source was the IRC network. All the WWW sites were down because of bandwidth; it took them ages to even get a plain-text web page back up. During that time, IRC networks were able to provide near real-time, moderated news channels across the Atlantic. You maybe could no longer log into a server on the other side, but at least the servers were able to keep up a server-to-server connection.
An obvious choice is to use the DB as a clearinghouse for messages. You have to store incoming messages somewhere anyway, lest they be lost if a server suddenly crashes. Put incoming messages into the central database and have notification processes on the TCP servers grab the messages and send them to the correct users.
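A sketch of such a notification process, with fetch_undelivered and mark_delivered standing in for whatever your actual schema provides:

import time

def notification_loop(db, local_conns):
    # local_conns: dict mapping user_id -> that user's open TCP connection.
    while True:
        for row in db.fetch_undelivered(user_ids=list(local_conns)):
            local_conns[row.user_id].send(row.payload)
            db.mark_delivered(row.id)
        time.sleep(0.5)  # or replace polling with a DB notification feature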
A TCP server cannot be clustered this way; the diagram you posted is a classic HTTP server example.
Since the device opens a raw TCP (pure socket) connection to the server, there will be no way of establishing a load-balancing server.
