I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions:
I don't want to open any ports or punch holes in the firewall
Also, I don't want to export/forward any internal ports outside my network
A time delay of less than, say, 5 minutes is acceptable, but closer to real time would be nice, if possible.
1+2 → I need to use a third party that is accessible by all my nodes. It follows that I probably also want to use encryption.
Solutions considered:
Email - setting up separate free email accounts (or one shared account, e.g. Gmail) which each client connects to using IMAP/SMTP
Google docs - using a shared online spreadsheet (e.g. Google docs) and some python library for accessing/changing cells using a polling mechanism
XMPP using connections to a third party server
IRC
Renting a cheap $5 VPS and setting up a ZeroMQ publish-subscribe node (or any other protocol) forwarded over SSH and having all nodes connect to it
Are there any other public (free) accessible message queues available (or platforms that can be misused as a message queue)?
I am aware of the solution of setting up my own message broker (RabbitMQ, Mosquitto, etc.) and making it accessible to my nodes somehow (SSH forwarding to a third host, etc.). But my question is primarily about any solution that doesn't require me to do that, i.e. any solution that utilizes already available/accessible third-party infrastructure. (Are there any public message brokers I can use?)
How about Mosquitto: a message broker that implements the MQ Telemetry Transport (MQTT) protocol versions 3.1 and 3.1.1. MQTT provides a lightweight method of carrying out messaging using a publish/subscribe model, which makes it suitable for "machine to machine" messaging. It supports encryption. Time to set up: in approximately 15 minutes you should be up and running. Since it is a message broker, you can write your own code to ensure you can communicate with third-party solutions. It achieves soft real-time, and depending on your setup you can achieve hard real-time. After you look into Mosquitto, have a look at Paho, the Eclipse Foundation project to which the Mosquitto client code was contributed.
Paho also provides a Python client, which offers support for both MQTT v3.1 and v3.1.1 on Python 2.7 or 3.x. It also provides some helper functions that make publishing one-off messages to an MQTT server very straightforward. There is plenty of documentation and examples to get you up and running.
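As a rough illustration, here is a minimal sketch with the Paho Python client; the broker hostname and topic names are placeholders, and an encrypted connection would additionally use tls_set() and port 8883:

    import paho.mqtt.client as mqtt
    import paho.mqtt.publish as publish

    # One-off publish using the helper mentioned above; the broker
    # hostname and topic are placeholders.
    publish.single('nodes/status', 'node-1 up', hostname='broker.example.com')

    # A simple subscriber that prints every message under nodes/.
    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect('broker.example.com', 1883)
    client.subscribe('nodes/#')
    client.loop_forever()  # blocks, dispatching to on_message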
I would recommend RabbitMQ or Redis (RabbitMQ preferred, because it is a very mature technology and insanely reliable). ZeroMQ is an option if you want a single-hop messaging system instead of a brokered messaging system such as RabbitMQ, but ZeroMQ is harder to use than RabbitMQ. How to use it depends on how you want to utilize the message passing: if it is task dispatch, you can use Celery; if you need slightly more low-level access, use Kombu with the librabbitmq transport.
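For the low-level route, a minimal sketch with Kombu's SimpleQueue helper might look like this; the broker URL and queue name are placeholders (a hosted AMQP URL would work the same way):

    from kombu import Connection

    # Placeholder broker URL; with librabbitmq installed, Kombu picks
    # it up automatically as the transport for amqp:// URLs.
    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        queue = conn.SimpleQueue('node-messages')
        queue.put({'node': 'a', 'payload': 'hello'})   # publish
        msg = queue.get(block=True, timeout=5)         # consume
        print(msg.payload)
        msg.ack()                                      # acknowledge receipt
        queue.close()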
Found https://www.cloudamqp.com/ which offers a free plan with a cloud-based installation of RabbitMQ. I will try that and see if it fulfills my needs.
Related
Our use case involves one class that has to remotely initialize several instances of another class (each on a different IoT device) and has to get certain results from each of these instances. At most, we would need to receive 30 messages a second from each remote client, with each message being relatively small. What type of architecture would you all recommend to solve this?
We were thinking that each class located on an IoT device would serve as a server and the class that receives the results would be the client. Should we create a server, each with its own channel, for each IoT device? Or is it possible to have each IoT device use the same service on the same server (meaning there would be multiple instances of the same service on the same server, but on different devices)?
The question would benefit from additional detail to help guide an answer.
gRPC (and its use of HTTP/2) is a 'heavier' protocol than e.g. MQTT. MQTT is more commonly used with IoT devices as it has a smaller footprint. REST/HTTP (even though heavier than MQTT) may also have benefits for you over gRPC/HTTP2.
If you're committed to gRPC, I wonder whether it would not be better to invert your proposed architecture and have the IoT device be the client. This seems to provide additional security in that the clients initiate communications with your servers rather than exposing services. Either way (and if you decide to use MQTT), hopefully you'll be using mTLS. I assume (!?) client implementations are smaller than server implementations.
Regardless of the orientation, clients and servers can (independently) stream messages. The IoT devices (client or server) could stream the 30 messages/second. The servers could stream management|control messages.
I've no experience managing fleets of IoT devices, but remote management|monitoring and over-the-air upgrades|patching are, I assume, important requirements for you. gRPC does not limit any of these capabilities, but debugging can be more challenging. With e.g. REST/HTTP it is trivial to curl endpoints, but with gRPC (even with the excellent grpcurl) you'll be constrained to the services implemented. Yes, you can't call a non-existent REST API either, but I find remote-debugging gRPC services more challenging than REST.
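To make the inverted architecture concrete, here is a sketch of the device side as a gRPC client streaming its readings. The Telemetry service, the generated telemetry_pb2/telemetry_pb2_grpc modules, and read_sensor() are all hypothetical names for illustration, not anything from your codebase:

    import time
    import grpc

    # Hypothetical modules generated by grpcio-tools from a telemetry.proto
    # defining: rpc StreamReadings (stream Reading) returns (Ack);
    import telemetry_pb2
    import telemetry_pb2_grpc

    def read_sensor():
        return 0.0  # placeholder for the real sensor read

    def readings():
        # Generator feeding the client-streaming RPC (~30 msgs/s).
        while True:
            yield telemetry_pb2.Reading(value=read_sensor())
            time.sleep(1 / 30.0)

    def main():
        # In production you would use grpc.secure_channel with
        # grpc.ssl_channel_credentials for the mTLS mentioned above.
        with grpc.insecure_channel('server.example.com:50051') as channel:
            stub = telemetry_pb2_grpc.TelemetryStub(channel)
            # Blocks, streaming until the generator stops (here: until
            # the device process is shut down).
            stub.StreamReadings(readings())

    if __name__ == '__main__':
        main()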
So I have a single twisted socket server that serves clients and eventually I'll need to add more servers. The problem is that connections to the server are unique and unable to be shared among multiple server instances.
This creates a problem if the servers are behind a load balancer, or if multiple users in a single chat are spread across multiple server instances, because a message to a chat won't successfully reach everyone.
How would I resolve this?
It may be a difficult task, since load balancing strategies depend on the underlying protocol (like HTTP for web servers).
Are you trying to design a load balancing system for basically any socket-based application? What I mean is that it is one thing to dispatch messages between multiple servers, ensuring correct synchronization; it is another thing to build a dynamic self-balancing system for any communication protocol.
To build your load balancer, you can use a "TCP proxy" like HAProxy (http://www.haproxy.org/)
To handle the communication between your application server instances (behind the load balancing server), you can use messaging like zeromq (http://zeromq.org/) or rabbitmq (http://www.rabbitmq.com/). You'll find some common architecture patterns there.
There are Python libs for both zeromq and rabbitmq, so the implementation within your twisted-based server is not too hard.
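As a rough sketch of the zeromq option: a tiny forwarder process can relay every chat message to all server instances, so a message received by one instance reaches users connected to the others. The port numbers and the 'chat' topic prefix are placeholders:

    import zmq

    def forwarder():
        # Central relay: instances publish to the XSUB side and
        # subscribe on the XPUB side.
        ctx = zmq.Context()
        frontend = ctx.socket(zmq.XSUB)
        frontend.bind('tcp://*:5559')
        backend = ctx.socket(zmq.XPUB)
        backend.bind('tcp://*:5560')
        zmq.proxy(frontend, backend)  # blocks, relaying messages

    def make_sockets(ctx):
        # Called in each twisted server instance.
        pub = ctx.socket(zmq.PUB)
        pub.connect('tcp://localhost:5559')   # outgoing chat messages
        sub = ctx.socket(zmq.SUB)
        sub.connect('tcp://localhost:5560')   # messages from other instances
        sub.setsockopt_string(zmq.SUBSCRIBE, 'chat')
        return pub, sub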
Recently I've been doing a lot of testing around different ways of serving our Django application. I've settled on uwsgi as it seems to fit our needs pretty well.
I've recently discovered that uwsgi also supports WebSockets and started looking into it and found some examples: https://github.com/unbit/uwsgi/blob/master/tests/
After running the example (websockets_chat.py) and taking a look through uwsgi's documentation for their websockets implementation, it appears as though you can only send broadcast or global messages.
Has anyone managed to find a way to transmit a message to a particular user or does uwsgi not support that level of communication yet?
Cheers
There is nothing like broadcast or global messages in the WebSocket specs. WebSockets only "upgrade" an HTTP connection to a lower-level one; what you do with that connection is up to you. The examples show integration with redis as a message exchanger, but you are free to make other uses of it.
For your specific case you will need to build a shared list of connected users and implement routing. Remember, you cannot rely on the node.js way, as it is based on a single-threaded setup where everything is way simpler. In uWSGI a websocket connection can happen in a thread, a process or a coroutine, so exchanging data between them is the key.
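A rough sketch of per-user routing, keeping redis as the exchanger as in the bundled examples: each connection subscribes to its own redis channel, so sending to a particular user is just publishing to that channel. The user:<id> naming and the get_user_id() helper are my own placeholders, not part of the uWSGI API:

    import redis
    import uwsgi

    def get_user_id(env):
        return env['QUERY_STRING']  # placeholder for a real auth lookup

    def application(env, start_response):
        uwsgi.websocket_handshake(env['HTTP_SEC_WEBSOCKET_KEY'],
                                  env.get('HTTP_ORIGIN', ''))
        user_id = get_user_id(env)
        r = redis.StrictRedis()
        channel = r.pubsub()
        channel.subscribe('user:%s' % user_id)
        while True:
            # Forward anything published to this user's channel...
            msg = channel.get_message(timeout=0.1)
            if msg and msg['type'] == 'message':
                uwsgi.websocket_send(msg['data'])
            # ...and route incoming frames of the form "<target>:<body>"
            # by publishing to the target user's channel.
            frame = uwsgi.websocket_recv_nb()
            if frame:
                target, _, body = frame.partition(b':')
                r.publish('user:%s' % target.decode(), body)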
I am working on location-based services project where I have several sensors that need to send asynchronous readings to a server, which will correlate the readings and generate a result. There will be some level of sensor to sensor communication as well, and I am interested in using XMPP as a transport due to its efficient messaging, real-time nature and NAT traversal.
I am hoping to find an example of (Python, or any other language) XMPP machine-to-machine (M2M) services, hopefully using a PubSub model for asynchronous communication rather than a polling-based RPC. I have not been able to find any examples online or in the XMPP books that I have seen, as they seem to be mostly focused on XMPP for human interaction such as chat, video, etc.
The general requirements that I have to work with are:
1. Multiple sensors sharing data with each other over XMPP
2. Asynchronous (PubSub) communication, subscribing to messages of interest
3. Hopefully written in Python, but any language would be a good starting point
4. Server correlates data from all the sensors and generates results, which can be made available to subscribers
5. Easy configuration / setup through discovery
Any ideas about where to look, or a good starting point would be much appreciated.
Thanks!
XMPP for M2M sounds like a nice idea.
About clients and servers, see http://xmpp.org/about-xmpp/technology-overview/pubsub/
In pubsub the server does basically all the hard work, and you have to implement very little intelligence in the clients. But this depends on what you want to do with the published information. I haven't tested any clients which actually do something with it.
This fits the pubsub model of XMPP pretty well.
All your machines would be both publishers and subscribers.
Your processing server in this case would also be another subscriber that will do its data processing as it receives published items.
Any example you find dealing with pubsub is easily applicable. In XMPP, whether the JID (Jabber ID) represents a user or a machine is irrelevant, and pubsub is not actually oriented toward human interaction, unlike, say, Multi User Chat.
There are many XMPP servers that support pubsub. I have used Smack and Openfire for a similar purpose myself. The server is of less importance to you, since any off-the-shelf product that supports PubSub will do the job. More important is a client library with pubsub support. I know Smack has this, but it is a Java library, not Python.
I do not know of anything that meets all those requirements, but you can use SleekXMPP to build your own. It is a pure-Python and well-documented XMPP library. XMPP has been used for computer-to-computer communication, which is quite nice because you can just test it from your own chat client. Look, for example, at http://www.python.org/about/success/projectpipe/
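As a starting point, here is a minimal sketch of a sensor publishing a reading with SleekXMPP's XEP-0060 (pubsub) plugin; the JIDs, pubsub service address, node name and payload namespace are all placeholders:

    import logging
    from xml.etree import ElementTree as ET

    import sleekxmpp

    class SensorPublisher(sleekxmpp.ClientXMPP):
        def __init__(self, jid, password, pubsub_server, node):
            super(SensorPublisher, self).__init__(jid, password)
            self.pubsub_server = pubsub_server
            self.node = node
            self.register_plugin('xep_0060')  # Publish-Subscribe
            self.add_event_handler('session_start', self.start)

        def start(self, event):
            self.send_presence()
            self.get_roster()
            # Publish one sensor reading as an XML payload.
            payload = ET.Element('{sensor:demo}reading')
            payload.text = '23.5'
            self['xep_0060'].publish(self.pubsub_server, self.node,
                                     payload=payload)
            self.disconnect(wait=True)

    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        xmpp = SensorPublisher('sensor1@example.com', 'secret',
                               'pubsub.example.com', 'sensors/temperature')
        if xmpp.connect():
            xmpp.process(block=True)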
Good luck
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is that the server first sends data to all the clients specifying a sending_interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). How can I do this using a Python socket program? (I.e. I want multiple-client to single-server connectivity, with clients sending data at the interval specified by the server.) I'd be grateful if anyone can help. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process publishes data on a port regardless of how many (zero to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at an interval specified via the default/config channel/socket.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors, because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher; if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python (it requires a C compiler), but it is frighteningly fast, and the pub/sub example is a cut/paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
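A minimal sketch of that layout with pyzmq: the server publishes the interval on a control socket and collects readings on a PULL socket. The port numbers and the 'interval' topic are placeholders:

    import time
    import zmq

    def server():
        ctx = zmq.Context()
        control = ctx.socket(zmq.PUB)        # broadcasts config to clients
        control.bind('tcp://*:5556')
        sink = ctx.socket(zmq.PULL)          # collects data from clients
        sink.bind('tcp://*:5557')
        time.sleep(1)                        # give subscribers time to connect
        control.send_string('interval 2.0')  # tell clients to send every 2 s
        while True:
            print(sink.recv_string())

    def client(name):
        ctx = zmq.Context()
        control = ctx.socket(zmq.SUB)
        control.connect('tcp://localhost:5556')
        control.setsockopt_string(zmq.SUBSCRIBE, 'interval')
        interval = float(control.recv_string().split()[1])
        data = ctx.socket(zmq.PUSH)
        data.connect('tcp://localhost:5557')
        while True:
            data.send_string('%s: reading' % name)
            time.sleep(interval)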
Multi-client to single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single Client and Single Server
Multiclient and Single Server
In my GitHub Repo Link: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading is the execution of multiple threads concurrently within a single process.
To understand it well, you can visit this link: https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
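For reference, a minimal sketch of the multithreaded variant: the server spawns one thread per accepted connection, so multiple clients are served concurrently (the host, port and echo behaviour are placeholders):

    import socket
    import threading

    def handle_client(conn, addr):
        # One thread per client: echo back whatever the client sends.
        with conn:
            while True:
                data = conn.recv(1024)
                if not data:
                    break
                conn.sendall(data)

    def main(host='0.0.0.0', port=9000):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                threading.Thread(target=handle_client, args=(conn, addr),
                                 daemon=True).start()

    if __name__ == '__main__':
        main()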