how does my computer handle multiple socket connections? - python

So I've written some Python code that starts up two or three separate processes, each listening on a different port, and then forwards the received data on to three different ports on localhost.
I've noticed a slowdown as I run more and more of these processes concurrently, and after testing to make sure the processes really were running concurrently, I can't come up with an explanation other than that they are taking turns using the network instead of all using it at the same time.
I couldn't find an explanation through Google, so can someone explain to me how exactly my computer handles connections on multiple sockets and ports? Does it take turns servicing each one, or can it connect to all of them simultaneously and send and receive data simultaneously as well? Thanks.

This might be helpful:
http://www.nyu.edu/classes/jcf/g22.2262-001_sp10/slides/session10/JavaSockets.pdf
What is the difference between a port and a socket?
Since everything is sent via packets, it must take turns at some level.
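To make the turn-taking point concrete: the kernel buffers incoming data per socket, so sockets don't block one another at the application level; even a single process can service several listening sockets at once. A minimal sketch using the standard-library selectors module (the port numbers are arbitrary examples, not from the question):

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    # Listen on several ports at once; 9001-9003 are made-up examples.
    for port in (9001, 9002, 9003):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen()
        srv.setblocking(False)
        sel.register(srv, selectors.EVENT_READ)

    # One loop services every socket: the kernel reports which ones are
    # ready, so the process never sits blocked on an idle port.
    while True:
        for key, _ in sel.select():
            conn, addr = key.fileobj.accept()
            print("connection on", conn.getsockname(), "from", addr)
            conn.close()

The packets themselves are serialized on the wire, but the OS overlaps all the waiting, so a slowdown like the asker's is more likely contention for CPU or bandwidth than sockets literally queuing behind each other.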

Related

Is decentralized communication between 3+ computers in a network possible in python?

So I've been racking my brain trying to implement a system in which computers on a network (where there are always three or more computers on the network) are able to communicate with each other asynchronously by exchanging data.
So far, the only solution I've been able to find is sockets, which, to my knowledge, require a client script and a server script. My first problem is that I'd like to remove the client and server roles, since all of the computers on the network are decentralized and run the same script concurrently without a server. Secondly, each computer sends sensor data from a specific point in time to other nodes chosen at random. If, for example, I have four computers on the network and, since they're all running the same script, they all decide to send their data to another computer at the same time, wouldn't that deadlock, with every node trying to contact another computer while those computers are unable to accept the connection because they're also busy trying to send?
I've considered using multithreading to run my begin_sync and wait_sync functions concurrently, but I'm not sure whether or not that would work. Does anyone have any suggestions or ideas for solutions that I could look into?
Thanks for your time!
As per NotTheBatman's response, I was able to get this to work using sockets on multiple ports. As for waiting for sensor data while also querying other nodes, I simply used multithreading, with great success.
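For later readers, a minimal sketch of that multithreading approach (the port and peer addresses are invented): every node runs the same script, with a daemon thread accepting incoming connections while the main thread is free to send, which is what prevents the mutual-wait scenario described above.

    import socket
    import threading

    MY_PORT = 5000                                    # hypothetical
    PEERS = [("10.0.0.2", 5000), ("10.0.0.3", 5000)]  # hypothetical

    def listen():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", MY_PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                print("received", conn.recv(4096), "from", addr)

    # Receiving happens on its own thread, so sending below never
    # blocks the ability to accept a connection from another node.
    threading.Thread(target=listen, daemon=True).start()

    def send_to_peer(addr, payload):
        with socket.create_connection(addr, timeout=5) as s:
            s.sendall(payload)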

Efficient way to send results every 1-30 seconds from one machine to another

Key points:
I need to send roughly 100 floats every 1-30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine is listening for them, passing them to an http server (nginx), a telegram bot and another program sending emails with alerts.
How would you do this and why?
Please be accurate. This is the first time I've worked with sockets or with Python, but I'm confident I can do this. Just give me the crucial details; enlighten me!
A small portion (a few lines) of the code would be appreciated if you think a part is delicate, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol, i.e. whether you will hold a persistent connection to your server or connect each time new data is ready.
Then: will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate service?
This would be the most secure way, especially if other people will be connecting to nginx to view sites etc.:
Write or use another server running on another port, for example a second nginx process dedicated to this. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
On the client side, build a packet of all the data every x seconds (with pickle.dumps(), json, or similar), then connect to your port with your credentials and send the packet.
A Python script can wait for it on the server side.
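A minimal sketch of that client side, assuming the third-party requests library and entirely made-up endpoint, credentials, and interval (adapt to your nginx setup):

    import json
    import time

    import requests  # third-party: pip install requests

    URL = "https://collector.example.com:8443/ingest"  # hypothetical
    AUTH = ("sensor-box", "s3cret")                    # hypothetical

    def read_sensors():
        return [0.0] * 100  # stand-in for the real sensor readings

    while True:
        packet = json.dumps(read_sensors())
        # HTTPS + basic auth, as suggested above; point verify= at your
        # own CA bundle if the certificate is self-signed.
        requests.post(URL, data=packet, auth=AUTH,
                      headers={"Content-Type": "application/json"},
                      timeout=10)
        time.sleep(5)  # anywhere in the 1-30 s range from the question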
Or you can write a socket server from scratch in Python (not especially hard) that waits for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you want or need one. I don't think that's necessary, though, and coding the break/recovery handling can get bulky.
You don't need anything elaborate: just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You can even rely solely on keys built in advance for security and then send only the data.
Do not worry about speed. Sockets are handled by the OS, and on a Unix-like system you can connect as many times as you want, at as short an interval as you need. Nothing short of a DoS attack will impact it much.
On Windows, you're better off using a ready-made server, because Windows sometimes does not release a socket in time, so you would be forced to wait or resort to workarounds (non-blocking sockets, SO_REUSEADDR, and then some flow control would be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python, or modify and run one of the examples from the internet. That's just me, though.
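If you do take the write-your-own-server route with SSL sockets, a minimal sketch of the listening side could look like the following; the certificate paths, port, and client greeting are all hypothetical, and a real version would need the framing and recovery logic mentioned above:

    import socket
    import ssl

    CERTFILE = "server.crt"  # pre-built key pair; paths are made up
    KEYFILE = "server.key"

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=CERTFILE, keyfile=KEYFILE)

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls:
            while True:
                conn, addr = tls.accept()  # TLS handshake happens here
                with conn:
                    data = conn.recv(4096)
                    # Drop clients that don't identify themselves first,
                    # as recommended above; the greeting is invented.
                    if not data.startswith(b"SENSOR-CLIENT-V1\n"):
                        continue
                    payload = data.split(b"\n", 1)[1]
                    # ... hand payload to the rest of the pipeline ...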
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, then multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
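A minimal sketch of that approach (addresses and values are invented; the sender and receiver run on different machines):

    import socket
    import struct

    # --- sender (the machine with the sensors) ---
    readings = [1.0, 2.5, 3.75]  # stand-in for the ~100 real floats
    payload = struct.pack("!%dd" % len(readings), *readings)
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.sendto(payload, ("192.168.1.20", 9999))  # hypothetical target

    # --- receiver (the machine behind nginx etc.) ---
    inp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inp.bind(("", 9999))
    data, addr = inp.recvfrom(2048)
    values = struct.unpack("!%dd" % (len(data) // 8), data)

Each datagram stands alone, which is what makes the Wireshark debugging mentioned above so painless.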

python ftpclient limit connections

I have a bit of a problem with ftplib in Python. It seems that it uses, by default, two connections (one for sending commands, one for data transfer?). However, my FTP server only accepts one connection at any given time. Since the only file that needs to be transferred is only about 1 MB, the rationale of being able to abort in-flight commands does not apply here.
Previously the same job was done by the Windows command-line FTP client, so I could just call that client from Python, but I would really prefer a pure-Python solution.
Is there a way to tell ftplib that it should limit itself to a single connection? In FileZilla I'm able to "limit the maximum number of simultaneous connections"; ideally I would like to reproduce this functionality.
Thanks for your help.
It seems that it uses, by default, two connections (one for sending commands, one for data transfer?).
That's how FTP works: you have a control connection (usually port 21) for commands and a data connection, on a dynamically negotiated port, for data transfer, file listings, and so on.
However, my FTP server only accepts one connection at any given time.
The FTP server might have a limit on concurrent control connections, but it must still accept data connections. Could you show, from tcpdump, Wireshark, log files, etc., why you think multiple connections are the problem?
In FileZilla I'm able to "limit the maximum number of simultaneous connections"
That setting applies to the number of control connections only. Does it work with FileZilla? I doubt that ftplib opens multiple control connections.
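For reference, a basic ftplib transfer looks like this (host and credentials are placeholders): FTP() opens the single control connection, and each retrbinary() call opens one short-lived data connection, which is part of the protocol rather than something ftplib chooses to do:

    from ftplib import FTP

    ftp = FTP("ftp.example.com")   # hypothetical host
    ftp.login("user", "password")  # hypothetical credentials
    ftp.set_pasv(True)             # passive mode: the client opens the
                                   # data connection, which most
                                   # firewalled setups require

    with open("file.bin", "wb") as f:
        ftp.retrbinary("RETR file.bin", f.write)
    ftp.quit()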

Python Socket Programming

I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is this: the server first sends data to all the clients specifying a sending_interval, and then each client keeps sending its data with a time gap of that interval (as specified by the server). Please help me out: how can I do this with a Python socket program? (I.e. I want multiple-client to single-server connectivity, with each client sending data at the gap the server specifies.) I would be grateful if anyone can help. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production-stable. It allows you to define publisher-subscriber relationships, where a publishing process publishes data on a port regardless of how many listening processes there are (zero to many). They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified by the default/config channel.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives: there are no connection errors, because the publisher doesn't care whether anyone is listening, and the subscriber can start before the publisher; if there's nothing to listen to, it just loops around and waits until there is.
Bindings are available in practically every language (it's freaky). The Python binding isn't pure Python (it requires a C compiler), but it is frighteningly fast, and the pub/sub example is a cut-and-paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other messaging patterns available in this library, including message queues, etc. The documentation is relatively complete, too.
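A minimal PUB-SUB sketch with pyzmq (port and topic are arbitrary); note that a late-joining subscriber simply misses earlier messages, which is why the publisher here sends in a loop:

    # publisher process (pip install pyzmq)
    import time
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")            # arbitrary example port
    while True:
        pub.send_string("sensor 42.0")  # "sensor" acts as the topic
        time.sleep(1)

    # subscriber process (a separate script):
    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "sensor")  # topic filter
    while True:
        print(sub.recv_string())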
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both setups:
Single Client and Single Server
Multiclient and Single Server
Both are in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is multithreaded socket programming?
Multithreading is the concurrent execution of multiple threads within a single process.
For a fuller walkthrough, see https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
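Independent of the repo above, a minimal sketch of the threaded server this pattern produces (port and interval are invented): the main loop hands each accepted client to its own thread, so clients never block one another, and the server announces the sending interval first, as the question asks:

    import socket
    import threading

    def handle(conn, addr):
        with conn:
            conn.sendall(b"5\n")  # tell the client its sending interval
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                print(addr, "sent", data)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 6000))  # arbitrary example port
    srv.listen()
    while True:
        conn, addr = srv.accept()
        threading.Thread(target=handle, args=(conn, addr),
                         daemon=True).start()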

Multiple programs using the same UDP port? Possible?

I currently have a small Python script that I'm using to spawn multiple executables (voice chat servers); in the next version of the software, the servers will be able to receive heartbeat signals on their UDP port. (There may be thousands of servers on one machine, on ports from 7878 upward.)
My problem is that these servers might (read: will) be running on the same machine as my Python script, and I had planned on opening a UDP port, sending the heartbeat, waiting for the reply, and voila: I could restart servers when/if they weren't responding by killing the task and reloading the server.
The problem is that I cannot open a UDP port that a server is already using. Is there a way around this? The project lead is still implementing the heartbeat, so any suggestions on how the heartbeat system could be implemented are welcome too. This is a pretty generic script, though, that might apply to other programs, so my main focus is still communicating on that UDP port.
This isn't possible. What you'll have to do is have one UDP master program that handles all UDP communication over the one port, and communicates with your servers in another way (UDP on different ports, named pipes, ...)
I'm pretty sure this is possible on Linux; I don't know about other UNIXes.
There are two ways to propagate a file descriptor from one process to another:
When a process fork()s, the child inherits all the file descriptors of the parent.
A process can send a file descriptor to another process over a "UNIX Domain Socket". See sendmsg() and recvmsg(). In Python, the _multiprocessing extension module will do this for you; see _multiprocessing.sendfd() and _multiprocessing.recvfd().
I haven't experimented with multiple processes listening on UDP sockets. But for TCP, on Linux, if multiple processes all listen on a single TCP socket, one of them will be randomly chosen when a connection comes in. So I suspect Linux does something sensible when multiple processes are all listening on the same UDP socket.
Try it and let us know!
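Taking up the "try it" suggestion, a minimal fork()-based experiment (POSIX only; port 7878 is the first in the question's range) looks like this: the child inherits the bound socket, so both processes read from the same UDP port, and each datagram is delivered to whichever process grabs it first:

    import os
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 7878))

    pid = os.fork()  # the child inherits the bound file descriptor
    who = "child" if pid == 0 else "parent"
    while True:
        data, addr = sock.recvfrom(1024)
        print(who, "got", data, "from", addr)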
