I am writing a React app with a Flask backend that should receive data through a serial port, process that data, and graph it. Currently I am using the backend to send certain pieces of data (the current time and available ports) to React. I want to set up a background Python thread that runs continuously, reading data from the serial port with pyserial, processing it, and sending it to React, but I'm not sure of the best way to accomplish this. My initial search brought me to Celery; however, I'm not sure it's a good option for a continuous task. Any help is much appreciated!
The problem is that reading from a serial port is normally done in a blocking way. That means you do not poll periodically; instead, you open the port once and read continuously, waiting for new data to arrive.
What you need is a separate thread: a part of the program that runs in parallel to your normal web server. Then you need some sort of data store to communicate between that thread and the web server. If you want your data to persist across device and server restarts, you should install a real database like Postgres. If not, you can simply use an in-memory array in your application (see the sketch below).
In the thread, read from the serial port and write the values to the database/array.
In your REST endpoint, you output the last X values.
Then your client can poll against this endpoint.
(If you want to get really fancy, you can use a more event-driven approach, but that would be more complicated to implement.)
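A rough sketch of that pattern (the port name, baud rate, and one-float-per-line parsing are just assumptions for illustration):

import threading
from collections import deque

import serial  # pyserial
from flask import Flask, jsonify

app = Flask(__name__)
readings = deque(maxlen=1000)  # in-memory "array" holding the last 1000 values

def read_serial():
    # Blocking reader: open the port once, then read forever.
    port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # adjust for your device
    while True:
        line = port.readline()  # blocks until a line arrives (or times out)
        if line:
            try:
                readings.append(float(line.strip()))  # assumes one float per line
            except ValueError:
                pass  # ignore malformed lines

@app.route("/data")
def data():
    return jsonify(list(readings)[-100:])  # the "last X values" for the client to poll

if __name__ == "__main__":
    threading.Thread(target=read_serial, daemon=True).start()
    app.run(port=5000)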
Related
I currently have a program that does work on a large set of data. At one point in the process, it sends the data to a server for more work to be done; my program then looks for the completed data periodically, sleeping if it is not ready and repeating until it fetches the data, then continuing to work locally.
Instead of my program polling repeatedly until the external server has finished, the server can send a simple HTTP POST to an address I designate once the work has finished.
So I assume I need Flask running at an address that can receive the notification, but I'm unsure of the best way to incorporate Flask into the original program. I'm thinking of just splitting my program into two parts.
part1.py:
    does work --> sends to the external server
    part1 ends
flask server.py:
    receives data --> spawns part2.py with the received data
The original program uses multiprocessing pools to avoid blocking while waiting for the server responses. With Flask, can I just repeatedly spawn new instances of part2 to do work on the data as it is received?
Am I doing this all completely wrong? I've just put this together with some googling and feel out of my depth.
You can use a broker with a message queue, e.g. Celery + Redis or RabbitMQ. Then, when the other server finishes doing whatever it has to do with the data, it can produce an event, and the first server will receive a notification.
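For illustration, a minimal sketch of that setup, assuming a local Redis broker; the module, task, and endpoint names are placeholders:

# tasks.py -- assumes Redis is running on localhost
from celery import Celery

celery_app = Celery("tasks", broker="redis://localhost:6379/0")

@celery_app.task
def part2(data):
    # continue the local processing on the data returned by the server
    ...

# server.py -- the endpoint the external server POSTs its notification to
from flask import Flask, request
from tasks import part2

app = Flask(__name__)

@app.route("/notify", methods=["POST"])
def notify():
    part2.delay(request.get_json())  # enqueue; a Celery worker picks it up
    return "", 204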
Key points:
I need to send roughly 100 float values every 1-30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine listens for them and passes them to an HTTP server (nginx), a Telegram bot, and another program that sends alert emails.
How would you do this and why?
Please be accurate. This is the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details, enlighten me!
A small portion (a few lines) of the core code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol, i.e. whether you will keep a persistent connection to your server or connect each time new data is ready.
Then decide whether you will use HTTP POST, WebSockets, or plain sockets, and whether you will rely exclusively on nginx or run your data catcher as a separate service.
This would be the most secure way if other people will also be connecting to nginx to view sites, etc.
Write or use another server running on another port, for example a second nginx process just for that. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then, on the client side, bundle all the data into a packet every x seconds (pickle.dumps(), JSON, or something similar), connect to your port with your credentials, and pass the packet.
A Python script can wait for it on the server side.
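A sketch of that client side; the URL, credentials, interval, and the read_sensor_values() helper are all made up for illustration:

import json
import time
import requests

URL = "https://example.com:8443/ingest"  # your second nginx/server
AUTH = ("user", "secret")                # basic-auth credentials

while True:
    packet = json.dumps(read_sensor_values())  # read_sensor_values() is hypothetical
    requests.post(URL, data=packet, auth=AUTH,
                  headers={"Content-Type": "application/json"}, timeout=10)
    time.sleep(5)  # "every x seconds"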
Alternatively, write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you want or need one. I don't think that is necessary, though, and coding recovery from broken connections can become bulky.
No, just wait on some port for a connection. The client must clearly identify itself (otherwise you drop the connection immediately), prove that it speaks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You could even rely solely on keys built in advance for security and then pass only data.
Do not worry about the speed. Sockets are handled by the OS, and if you are on a Unix-like system you can connect as many times as you want in as short an interval as you need. Nothing short of a DoS attack will impact it much.
If you're on Windows, it's better to use an existing server, because Windows sometimes does not release a socket in time, so you will be forced to wait or resort to some hackery to avoid this unfortunate behaviour (non-blocking sockets, SO_REUSEADDR, and some flow control will be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python, or modify and run one of the examples from the internet. That's just me, though.
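If you do go the from-scratch route, a rough sketch of a TLS socket server; the cert/key paths, port, one-packet-per-connection framing, and the handle_packet() helper are assumptions:

import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")  # your cert/key

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls:
        while True:
            conn, addr = tls.accept()  # TLS handshake happens here
            with conn:
                data = conn.recv(65536)  # one packet per connection
                if data:
                    handle_packet(data)  # handle_packet() is hypothetical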
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, then multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
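A sketch of both ends, assuming double-precision floats in network byte order (host and port are placeholders):

import socket
import struct

# sender: pack the floats into one datagram
values = [1.0, 2.5, 3.25]  # your ~100 floats go here
payload = struct.pack("!%dd" % len(values), *values)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("192.0.2.10", 9999))  # placeholder receiver address

# receiver: unpack the datagram back into floats
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("", 9999))
data, addr = recv.recvfrom(2048)
floats = struct.unpack("!%dd" % (len(data) // 8), data)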
Hi, I have a client-server architecture.
1. Server script:
- runs and listens on a socket
- on receiving a client connection, a new thread is forked to handle that client's data
- each thread accepts the data sent by the client and stores it in the database
2. Client script:
- runs on a timer every 0.02 seconds and sends data to the server through the socket
Now, when I run both scripts, the database gets locked frequently.
Please let me know how I should handle this.
If you need to see the scripts, let me know.
Your question tags indicate that you are using SQLite. The SQLite database is not really designed for concurrent operation on the same database; its locks are per-database-file. This means that your threads are not running in parallel, but waiting for an exclusive lock on the entire database, which effectively serializes them.
If you need concurrent writes, you should switch to a client-server database that offers finer-grained locking of writes, such as PostgreSQL.
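For illustration, roughly what a per-thread insert could look like against PostgreSQL, assuming the psycopg2 driver and an existing readings table; the DSN is a placeholder:

import psycopg2  # assumed driver

def store(value):
    # each thread may use its own connection; PostgreSQL handles
    # concurrent writers without locking the whole database file
    conn = psycopg2.connect("dbname=mydb user=me host=localhost")  # placeholder DSN
    try:
        with conn, conn.cursor() as cur:
            cur.execute("INSERT INTO readings (value) VALUES (%s)", (value,))
    finally:
        conn.close()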
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. The server should first send data to all the clients specifying a sending_interval, and then all the clients should keep sending their data with a time gap of that interval (as specified by the server). Please help me out: how can I do this using a Python socket program? (I.e. I want multiple-client-to-single-server connectivity, with clients sending data at the time gap specified by the server.) I will be grateful if anyone can help me. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified by the default/config channel.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors, because the publisher doesn't care whether anyone is listening, and the subscriber can start before the publisher: if there's nothing to listen to, it just loops around and waits until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python; it requires a C compiler, but it is frighteningly fast, and the pub/sub example is a cut-and-paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
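For illustration, a minimal pyzmq PUB-SUB sketch; the port and message format are placeholders, and the two parts run as separate processes:

# publisher.py -- each data-producing client
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")
while True:
    pub.send_string("sensor1 42.0")  # topic prefix + payload
    time.sleep(1)

# subscriber.py -- a listening process, started independently
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # empty prefix = subscribe to everything
while True:
    print(sub.recv_string())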
Multi-client and single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single client and single server
Multi-client and single server
Both are in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is multithreaded socket programming?
Multithreading is the execution of multiple threads concurrently within a single process.
To understand it well, you can visit this link: https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
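A minimal sketch of that approach for this question, where the server sends the interval on connect; the port and the 5-second interval are placeholder assumptions:

# server.py -- sends the interval on connect, then receives forever
import socket
import threading

INTERVAL = b"5"  # seconds; sent to every client on connect

def handle(conn, addr):
    with conn:
        conn.sendall(INTERVAL)  # tell the client how often to send
        while True:
            data = conn.recv(1024)
            if not data:
                break  # client disconnected
            print(addr, data)

srv = socket.create_server(("", 9000))
while True:
    conn, addr = srv.accept()
    threading.Thread(target=handle, args=(conn, addr), daemon=True).start()

# client.py -- reads the interval first, then sends at that pace
import socket
import time

sock = socket.create_connection(("localhost", 9000))
interval = float(sock.recv(16).decode())
while True:
    sock.sendall(b"reading")
    time.sleep(interval)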
I need a way to simulate connectivity problems in an automated test suite, on Linux, and preferably from Python. Some sort of proxy that I can put in front of the web server that can hang or drop connections after one trigger or another (after X bytes transferred, etc) would be perfect.
It doesn't seem too hard to build, but I'd rather grab something pre-existing, if anyone has any good recommendations.
When I needed one, I found that building it yourself is the best option.
Start by bringing up a threaded server in Python with socketserver: http://docs.python.org/dev/library/socketserver.html (you don't have to use the class itself).
It's very simple:
In the new connection thread, create a new socket and connect it to the real server.
Then put both sockets in a list and pass it to select.select (import select).
Then, when socket x receives data, send it to y; when socket y receives data, send it to x (don't forget to close the sockets when you receive an empty string).
Now you can do whatever you want...
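Putting that together, a rough sketch; the ports are placeholders, and your fault injection (drop, hang, truncate after X bytes) would go where the bytes are forwarded:

import select
import socket
import socketserver

REAL_SERVER = ("localhost", 8000)  # the web server under test; placeholder

class Proxy(socketserver.BaseRequestHandler):
    def handle(self):
        upstream = socket.create_connection(REAL_SERVER)
        try:
            while True:
                readable, _, _ = select.select([self.request, upstream], [], [])
                for s in readable:
                    data = s.recv(4096)
                    if not data:  # empty bytes: one side closed
                        return
                    # x -> y and y -> x; inject faults here
                    (upstream if s is self.request else self.request).sendall(data)
        finally:
            upstream.close()

socketserver.ThreadingTCPServer(("", 8080), Proxy).serve_forever()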
If you need anything, I'm here.