I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is this: the server first sends data to all the clients specifying a sending interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). Please help me out with how I can do this using a Python socket program (i.e. I want multiple-client-to-single-server connectivity, with each client sending data at the time gap specified by the server). I will be grateful if anyone can help. Thanks in advance.
This problem is easily solved with the ZeroMQ socket library. It is production-stable. It lets you define publisher-subscriber relationships, where a publishing process publishes data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified over that default/config channel.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives: there are no connection errors, because the publisher doesn't care whether anyone is listening, and the subscriber can start before the publisher; if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python (it requires a C compiler), but it is frighteningly fast, and the pub/sub example is a cut-and-paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
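As a minimal sketch of that control channel, assuming pyzmq is installed; the port, hostname, and the "interval 5" message format are my own choices, not part of ZeroMQ:

# server.py - broadcasts the sending interval to every connected client
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")
while True:
    # Re-publish periodically: subscribers miss anything sent before they
    # connected, so a one-shot message could be lost to late joiners.
    pub.send_string("interval 5")
    time.sleep(1)

# client.py - receives the interval, then sends its data on that schedule
import time
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://server-host:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "interval")
interval = int(sub.recv_string().split()[1])
while True:
    # ... publish or push this client's own data here ...
    time.sleep(interval)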
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single Client and Single Server
Multiclient and Single Server
In my GitHub Repo Link: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading is the concurrent execution of multiple threads within a single process.
To understand this well, you can visit https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
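A condensed sketch of the thread-per-client pattern (not taken from the repo above; the port and the echo behaviour are placeholders):

import socket
import threading

def handle_client(conn, addr):
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:              # client closed the connection
                break
            conn.sendall(data)        # echo back as a stand-in for real work

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen()
while True:
    conn, addr = server.accept()
    # One thread per client keeps each connection independent.
    threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()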
Key points:
I need to send roughly 100 float numbers every 1-30 seconds from one machine to another.
The first machine captures those values through sensors connected to it.
The second machine listens for them and passes them to an HTTP server (nginx), a Telegram bot, and another program that sends emails with alerts.
How would you do this and why?
Please be accurate. It's the first time I've worked with sockets or with Python, but I'm confident I can do this. Just give me the crucial details; enlighten me!
A small portion (a few lines) of the core code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and choose a protocol, i.e. whether you will keep a persistent connection to your server or connect each time new data is ready.
Then: will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate serving service?
This would be the most secure way, if other people will also be connecting to nginx to view sites, etc.:
Write or use another server running on another port, for example another nginx process just for this. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then on the client side, build a packet every x seconds out of all the data (pickle.dumps(), JSON, or similar), connect to your port with your credentials, and pass the packet along.
A Python script may wait for it there.
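A minimal sketch of that client loop, assuming the requests library and a hypothetical https://collector.example.com/ingest endpoint protected by basic auth (URL, credentials, and interval are all placeholders):

import time

import requests

def read_sensors():
    # Placeholder for the real data-gathering code.
    return [1.0, 2.0, 3.0]

while True:
    packet = {"readings": read_sensors()}
    requests.post(
        "https://collector.example.com/ingest",   # hypothetical endpoint
        json=packet,                              # serialized as JSON for us
        auth=("user", "secret"),                  # HTTP basic auth over TLS
        timeout=10,
    )
    time.sleep(5)                                 # the "every x seconds" gap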
Or you could write a socket server from scratch in Python (not extra hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you desire or need one. I don't think it is necessary, though, and the code for recovering from broken connections can become bulky.
No, just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and then send the data.
Use SSL sockets for this so that you don't have to implement encryption yourself to protect the authentication data. You may even rely solely on keys built in advance for security, and then pass only data.
Do not worry about the speed. Sockets are handled by the OS, and on a Unix-like system you may connect as many times as you want, at as short a time interval as you need. Nothing short of a DoS attack will impact it much.
If you're on Windows, it is better to use a finished server, because Windows sometimes does not release a socket on time, so you would be forced to wait or do some hackery to avoid this unfortunate behaviour (non-blocking sockets, SO_REUSEADDR, and then some flow control would be needed).
As long as your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own light-weight server in Python, or modify and run one of the examples from the internet. That's me, though.
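A rough sketch of that light-weight SSL server using only the standard library; the certificate paths and the port are assumptions, and the credential check is left as a comment:

import socket
import ssl

# A TLS context using a server certificate generated in advance.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", 8443))
    sock.listen()
    with context.wrap_socket(sock, server_side=True) as tls:
        while True:
            conn, addr = tls.accept()        # TLS handshake happens here
            with conn:
                data = conn.recv(4096)       # the client's packet
                # ... check credentials, drop the connection if they fail,
                # otherwise store/process the data ...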
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN, you could even use UDP multicast, so multiple receivers could get the data if needed). You can safely send a maximum of roughly 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
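A minimal sketch of both ends; the address, port, and sample values are placeholders:

# sender.py
import socket
import struct

values = [1.5, 2.25, 3.0]                            # your ~100 sensor floats
payload = struct.pack(f"!{len(values)}d", *values)   # network-order doubles

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("192.168.1.10", 5005))

# receiver.py
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5005))
data, addr = sock.recvfrom(2048)                     # one datagram per read
floats = struct.unpack(f"!{len(data) // 8}d", data)  # 8 bytes per double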
I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions:
I don't want to open any ports or punch holes in the firewall
Also, I don't want to export/forward any internal ports outside my network
A time delay of less than, say, 5 minutes is acceptable, but closer to real time would be nice, if possible.
1+2 → I need to use a third party accessible by all my nodes. From this it follows that I probably also want to use encryption.
Solutions considered:
Email - setting up separate (or shared) free email accounts (e.g. Gmail) that each client connects to using IMAP/SMTP
Google Docs - using a shared online spreadsheet and some Python library for accessing/changing cells, with a polling mechanism
XMPP, using connections to a third-party server
IRC
Renting a cheap $5 VPS and setting up a ZeroMQ publish-subscribe node (or any other protocol) forwarded over SSH, with all nodes connecting to it
Are there any other publicly (freely) accessible message queues available (or platforms that can be misused as message queues)?
I am aware of the option of setting up my own message broker (RabbitMQ, Mosquitto, etc.) and making it accessible to my nodes somehow (SSH forwarding to a third host, etc.). But my question is primarily about solutions that don't require me to do that, i.e. solutions that use already available/accessible third-party infrastructure. (Are there any public message brokers I can use?)
How about Mosquitto, a message broker that implements the MQ Telemetry Transport (MQTT) protocol versions 3.1 and 3.1.1? MQTT provides a lightweight way of doing messaging using a publish/subscribe model, which makes it suitable for "machine to machine" messaging. It supports encryption. Time to set up: in approximately 15 minutes you should be up and running. Since it is a message broker, you can write your own code to communicate with third-party solutions. It achieves soft real time, and depending on your setup you can achieve hard real time. After you look into Mosquitto, have a look at Paho, which is the port of Mosquitto's client code to the Eclipse Foundation.
Paho also provides a Python client, which offers support for both MQTT v3.1 and v3.1.1 on Python 2.7 or 3.x. It also provides some helper functions to make publishing one-off messages to an MQTT server very straightforward. There is plenty of documentation and there are plenty of examples to get you up and running.
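A short sketch with the paho-mqtt package (the 1.x client API; the broker host and topic names are my own examples, e.g. a Mosquitto broker on localhost):

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

# One-off publish using the helper mentioned above.
publish.single("sensors/temperature", "21.5", hostname="localhost")

# A subscriber that prints everything arriving under sensors/.
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()   # paho-mqtt 2.x needs a CallbackAPIVersion argument here
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe("sensors/#")
client.loop_forever()    # blocks, dispatching messages to on_message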
I would recommend RabbitMQ or Redis (RabbitMQ preferred, because it is a very mature and insanely reliable technology). ZMQ is an option if you want single-hop messaging instead of a brokered messaging system such as RabbitMQ, but ZMQ is harder to use than RabbitMQ. It also depends on how you want to use the message passing: if it is task dispatch, you can use Celery; if you need slightly lower-level access, use Kombu with the librabbitmq transport.
I found https://www.cloudamqp.com/, which offers a free plan with a cloud-based installation of RabbitMQ. I will try that and see if it fulfills my needs.
I have a Python program that reads in values from an ADC, then writes them to a file as well as sending them via TCP if a connection is available. I can send the data fine; however, as data is constantly being read, I would like to keep the connection open. How do I get the client to check that the server has more data to send, and thus keep the connection open?
This scenario seems very similar to one of our applications. We use ZeroMQ for this.
Using ZeroMQ with PUB/SUB
The PyZMQ docs include an example of using Pub/Sub.
The data provider creates a PUB socket and sends messages to it.
The data consumer sets up a SUB socket and reads messages from it.
Typically the PUB socket is the fixed part of the infrastructure, so it binds to some port and the SUB side connects. But if you like, you can switch this around and it works too.
Advantages are:
the provider sends messages to the PUB socket and does not block on it
reconnects are handled automatically
if there is no consumer or connection, the PUB socket silently drops the messages
the code to implement this is very short (see the sketch below)
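A condensed sketch of the two sides, assuming pyzmq; the port and the "adc" topic prefix are arbitrary choices:

# provider.py - binds a PUB socket and never blocks on sending
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5557")
while True:
    pub.send_string("adc 3.14")   # silently dropped if nobody is subscribed
    time.sleep(1)

# consumer.py - connects a SUB socket and filters on the topic prefix
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5557")
sub.setsockopt_string(zmq.SUBSCRIBE, "adc")
while True:
    topic, value = sub.recv_string().split()
    print(value)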
Other messaging patterns with ZeroMQ
PUB/SUB is just one option, there are other combinations like PUSH/PULL, REQ/REP etc.
I have some services that have been running for years with PUSH/PULL, and they hold up quite well (typically six weeks pass before some restart is needed, and that is usually due to hardware problems rather than the ZeroMQ library).
I need to have a TCP socket client connected to a server to send data and receive responses.
But this socket must always be on, and I cannot open another socket.
I always have some data to send as time goes by, and I then process the answer to the previously sent data later.
If I could open many sockets, I think it would be easier. But in my case I have to send everything over the same socket, asynchronously.
So the question is: what do you recommend within the Python ecosystem (Twisted, Tornado, etc.)?
Should I consider node.js or another option?
I highly recommend Twisted for this:
It comes with out-of-the-box support for many TCP protocols.
It is easy to maintain a single connection: there is a ReconnectingClientFactory that deals with disconnections using exponential backoff, and LoopingCall makes it easy to implement a heartbeat (both sketched below).
Stateful protocols are also easy to implement and intermingle with complex business logic.
It's fun.
I have a service that is exactly like the one you mention (single login, stays on all the time, processes data). It's been on for months, working like a champ.
Twisted can be hard to get your head around, but the tutorials here are a great start. Knowing Twisted will get you far in the long run!
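A rough sketch of how those pieces fit together; the host, port, and the trivial line protocol are placeholders, not the service described above:

from twisted.internet import protocol, reactor
from twisted.internet.task import LoopingCall

class Client(protocol.Protocol):
    def connectionMade(self):
        # LoopingCall provides the heartbeat / periodic send.
        self.heartbeat = LoopingCall(self.send_data)
        self.heartbeat.start(5.0)                 # every 5 seconds

    def send_data(self):
        self.transport.write(b"ping\n")           # stand-in for real payloads

    def dataReceived(self, data):
        print("server said:", data)               # process answers here

    def connectionLost(self, reason):
        self.heartbeat.stop()

class ClientFactory(protocol.ReconnectingClientFactory):
    # ReconnectingClientFactory retries with exponential backoff for us.
    def buildProtocol(self, addr):
        self.resetDelay()                         # back to the initial delay
        return Client()

reactor.connectTCP("example.com", 1234, ClientFactory())
reactor.run()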
"i have to send everything on the same socket asynchronously"
Add your data to a queue and have a separate thread take items off the queue and send them via socket.send().
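A minimal sketch of that queue-plus-thread pattern; the host and port are placeholders, and sendall() is used instead of send() so partial writes are handled:

import queue
import socket
import threading

send_queue = queue.Queue()

def sender(sock):
    while True:
        data = send_queue.get()   # blocks until something is queued
        sock.sendall(data)        # only this thread ever touches the socket

sock = socket.create_connection(("example.com", 1234))
threading.Thread(target=sender, args=(sock,), daemon=True).start()

# Any part of the program can now send without blocking on the network:
send_queue.put(b"some payload\n")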
Status Quo:
I have two Python apps (a frontend server and a data collector, with a database 'between' them).
I am currently using Redis as the DB, with its publish/subscribe mechanism to notify the frontend when new data is available.
But I may want to use a different database (and I don't want to keep Redis on the system just for the pub/sub).
Are there any simple alternatives for notifying my frontend when the data collector has committed new data to the database (without using an external message queue like beanstalkd or Redis)?
ZeroMQ is a good option. It has good Python bindings, and it makes communicating between processes on the same machine and processes on different machines look almost identical.
Start by reading the guide: http://zguide.zeromq.org/page:all
As I mentioned in my comment, if you want something that goes across a network, then other than setting up a web service (a Flask app?) or writing your own INET socket server, there is nothing built into the operating system for communicating between machines. Beanstalk has a very simple Python API, and I've used it for this kind of thing very successfully.
import beanstalkc

try:
    beanstalk = beanstalkc.Connection(host="my.host.com")
    beanstalk.watch("update_queue")    # only consume jobs from this tube
except beanstalkc.SocketError:
    print("Error connecting to beanstalk")
    raise

while True:
    job = beanstalk.reserve()          # blocks until a job arrives
    do_something_with_job(job)
    job.delete()                       # remove the job once it is handled
If you are only going to be working on the same machine, then read up on Linux IPC. A socket connection between processes is very fast and has practically zero overhead. Sockets can also be part of an asynchronous program when you take advantage of epoll callbacks.
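A small sketch of that callback style using the standard library's selectors module (which uses epoll on Linux); the port and the echo handling are placeholders:

import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)   # callback stored as data

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)        # echo back as a stand-in for real handling
    else:
        sel.unregister(conn)      # peer closed the connection
        conn.close()

server = socket.socket()
server.bind(("localhost", 9999))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():   # wait for readable sockets
        key.data(key.fileobj)     # invoke the stored callback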