I'm struggling to design an efficient way to exchange information between processes in my LAN.
Until now, I've been working with a single RPi, running a bunch of Python scripts as services. The services communicated over sockets (multiprocessing.connection Client and Listener), and it worked reasonably well.
I recently installed another RPi with some further services, and I realized that as the number of services grows, the problem scales pretty badly. In general, I don't need every service to communicate with every other one, but I'm looking for an elegant solution that lets me scale quickly if I need to add more services.
So essentially I thought I first need a map of where each service lives, like
Service 1 -> RPi 1
Service 2 -> RPi 2
...
The first approach I came up with was the following:
I thought I could add an additional "gateway" service, so that any application running on a given RPi would send its data/request to its local gateway, and the gateway would then forward it to the proper local service or to the gateway running on the other device.
Later I also realized that I could instead just give the map to each service and let every service manage its own connections. This would mean opening many listeners on external addresses, though, and I'm not sure it's the best option.
Do you have any suggestions? I'm also interested in exploring different options for implementing the actual connection, in case the Client / Listener one isn't efficient.
Thank you for your help. I'm learning so much with this project!
I need to write a script to stress test a UDP server. It needs to simulate about 5000 online users and about 400 concurrent users. I couldn't find a similar tool on Google, so I wrote a UDP client myself. But I had a problem simulating multiple clients. The solution I came up with:
One socket per client
How to mark online users and concurrent users when using multithreading and multiple sockets to simulate clients?
I encapsulated the client in a class; in the class's __init__ method, I increment a variable to record the number of online users. With this approach, concurrent operations cannot be performed successfully.
Is it feasible to create 5000 sockets with threads? Is this a best practice? Will performance be good?
Other approaches?
Is there another approach I haven't thought of? Am I on the wrong track?
Is there a mature testing framework that can be used for reference?
Finally, English is not my mother tongue, so please forgive any typos or grammar mistakes. Thank you for reading, and I look forward to your reply.
There is the Apache JMeter tool, which is free, open source, and modular.
There is a UDP Request sampler plugin which adds support for the UDP protocol to JMeter.
The "5000 online users and 400 concurrent users" requirement may be interpreted in the following manner: real users don't hammer the system under test non-stop; they need some time to "think" between operations, i.e. read text, type a response, fill in forms, take a phone call, etc. So you need to introduce realistic think times using JMeter Timers, so you can come up with a configuration where:
5000 users are "online" (connected to the server)
4600 are not doing anything, just "sleeping"
400 are actively sending requests
As long as your machine is capable of doing this without running out of CPU, RAM, network bandwidth, etc., it should be fine. Personally, I would use something like greenlet.
I have an existing software project that has a relatively primitive plugin system, and I wanted to expand it by providing a web interface.
Since my application processes realtime data, WebSockets are the only option besides WebRTC.
My previous attempt used ZeroMQ domain sockets on the Python side and a server in Node.js that connected to the domain socket.
This solution works great and has some benefits over the plugin server, but I want to offer a simpler option for folks that don't need the benefits and don't want the extra complexity.
How would you go about implementing this, and is it even possible to do so?
Otherwise, I'll still use a separate process, but build everything around the WebSocket endpoint with FastAPI, and use subprocess to spawn a second process that also connects to the domain socket.
Hope my question isn't stupid, or an RTFM case.
Our use case involves one class that has to remotely initialize several instances of another class (each on a different IoT device) and has to get certain results from each of these instances. At most, we would need to receive 30 messages a second from each remote client, with each message being relatively small. What type of architecture would you all recommend to solve this?
We were thinking that each class located on an IoT device would act as a server, and the class that receives the results would be the client. So should we create a separate server, each with its own channel, for each IoT device? Or is it possible to have every IoT device use the same service on the same server (meaning there would be multiple instances of the same service on the same server, but on different devices)?
The question would benefit from additional detail to help guide an answer.
gRPC (and its use of HTTP/2) is a 'heavier' protocol than e.g. MQTT. MQTT is more commonly used with IoT devices, as it has a smaller footprint. REST/HTTP (even though heavier than MQTT) may also have benefits for you over gRPC/HTTP2.
If you're committed to gRPC, I wonder whether it would not be better to invert your proposed architecture and have the IoT device be the client? This seems to provide additional security in that the clients initiate communications with your servers rather than expose services. Either way (and if you decide to use MQTT), hopefully you'll be using mTLS. I assume (!?) client implementations are smaller than server implementations.
Regardless of the orientation, clients and servers can (independently) stream messages. The IoT devices (client or server) could stream the 30 messages/second. The servers could stream management|control messages.
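If you do invert the orientation, the interface could look something like this hypothetical proto sketch (all service and message names are illustrative, not taken from the question); the device, acting as the client, opens the streams, so it never has to expose a port:

```proto
syntax = "proto3";

// Hypothetical service: the IoT device is the gRPC *client* and pushes
// readings to a central collector.
service Collector {
  // Client-side streaming: a device streams its ~30 msg/s readings
  // and receives a single acknowledgement when the stream ends.
  rpc PushReadings (stream Reading) returns (Ack);

  // Server-side streaming: the device subscribes to control messages.
  rpc Subscribe (DeviceId) returns (stream Control);
}

message Reading {
  string device_id    = 1;
  int64  timestamp_ms = 2;
  double value        = 3;
}

message DeviceId { string id = 1; }
message Ack      { uint32 received = 1; }
message Control  { string command = 1; }
```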
I've no experience managing fleets of IoT devices, but remote management|monitoring and over-the-air upgrades|patching are, I assume, important requirements for you. gRPC does not limit any of these capabilities, but debugging can be more challenging. With e.g. REST/HTTP, it is trivial to curl endpoints, but with gRPC (even with the excellent grpcurl) you'll be constrained to the services implemented. Yes, you can't call a non-existent REST API either, but I find remote-debugging gRPC services more challenging than REST.
I have a bit of an open-ended question for you all. I wish to create a simple chat room such as this example here: https://www.geeksforgeeks.org/simple-chat-room-using-python/ but I am lost as to how to do it over the internet rather than just a local network.
Any pointers/help would be appreciated!
Thanks :)
There are multiple ways about this. You can either:
Run locally and expose your Python chat system to the internet.
Run your Python chat system in some online server provider (Heroku, AWS, etc.).
The first method requires you to do some port-forwarding on your local network, essentially mapping your 127.0.0.1:8081 local server to your public IP (so you would connect via the internet as myip:8081). This method, however, comes with its limitations: when you turn off your computer, you are also effectively turning off your server to the rest of the internet. The second method will ensure the server stays on at all times, and is likely what you are looking for. Heroku is a great starting point, as they provide a free tier where you can test everything out.
So I've been racking my brain trying to implement a system in which computers on a network (where there are always three or more computers on the network) are able to asynchronously communicate with each other by sending each other data.
So far, all I've been able to find as far as solutions go is sockets--which, to my knowledge, require a client and a server script. My first problem is that I'd like to remove any client or server roles, since all of the computers on the network are decentralized and running the same script concurrently without a server. Secondly, all of the computers are sending other nodes (chosen at random) sensor data from a specific point in time. If, for example, I have 4 computers on the network and--since they're all running the same script--they all decide to send their data to another computer at the same time, wouldn't that cause a deadlock, since all of the nodes are trying to communicate with another computer, but those computers are unable to accept the connection because they're also trying to send data?
I've considered using multithreading to run my begin_sync and wait_sync functions concurrently, but I'm not sure whether or not that would work. Does anyone have any suggestions or ideas for solutions that I could look into?
Thanks for your time!
As per NotTheBatman's response, I was able to get this to work using sockets on multiple ports. As far as how I handled being able to wait for sensor data and query other nodes, I simply used multithreading with great success.