Multiple programs using the same UDP port? Possible? - python

I currently have a small Python script that I'm using to spawn multiple executables (voice chat servers), and in the next version of the software the servers will be able to receive heartbeat signals on their UDP port. (There could be thousands of servers on one machine, using ports 7878 and up.)
My problem is that these servers might (read: will) be running on the same machine as my Python script. I had planned on opening a UDP port, sending the heartbeat, waiting for the reply, and voila... I could restart servers when/if they weren't responding by killing the task and re-loading the server.
The problem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is still implementing the heartbeat, so I'm sure any suggestions on how the heartbeat system could be implemented would be welcome too. This is a pretty generic script, though, that might apply to other programs, so my main focus is still communicating on that UDP port.

This isn't possible. What you'll have to do is have one UDP master program that handles all UDP communication over the one port, and communicates with your servers in another way (UDP on different ports, named pipes, ...)

I'm pretty sure this is possible on Linux; I don't know about other UNIXes.
There are two ways to propagate a file descriptor from one process to another:
When a process fork()s, the child inherits all the file descriptors of the parent.
A process can send a file descriptor to another process over a "UNIX Domain Socket". See sendmsg() and recvmsg(). In Python, the _multiprocessing extension module will do this for you; see _multiprocessing.sendfd() and _multiprocessing.recvfd().
I haven't experimented with multiple processes listening on UDP sockets. But for TCP, on Linux, if multiple processes all listen on a single TCP socket, one of them will be randomly chosen when a connection comes in. So I suspect Linux does something sensible when multiple processes are all listening on the same UDP socket.
Try it and let us know!
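For what it's worth, here is a minimal sketch of option 1 (fork() inheritance) with a UDP socket; the port is just the first one from the question, and the prints are placeholders. (On Python 3.9+, socket.send_fds() and socket.recv_fds() wrap the sendmsg()/recvmsg() dance from option 2.)

import os
import socket

# Minimal sketch of fork() inheritance: bind once, then fork.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 7878))   # first port from the question, purely an example

pid = os.fork()
if pid == 0:
    # Child: the already-bound descriptor is inherited, so recvfrom() works here.
    data, addr = sock.recvfrom(4096)
    print("child received", data, "from", addr)
    os._exit(0)
else:
    # Parent: the same socket object remains usable in this process as well.
    os.waitpid(pid, 0)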

Related

how does my computer handle multiple socket connections?

So I've written some Python code that starts up two or three separate processes that each listen on different ports of the same socket, and then send the received data on to three different ports on localhost.
I've noticed a slowdown when running more and more of these processes concurrently, and after testing to make sure the processes really were running concurrently, I can't come up with an explanation other than that they are taking turns using the socket instead of all working at the same time.
I couldn't find an explanation through Google, so can someone explain how exactly my computer handles connections to multiple sockets and ports? Does it take turns connecting to each, or can it connect to all of them simultaneously and send and receive data simultaneously as well? Thanks.
This might be helpful:
http://www.nyu.edu/classes/jcf/g22.2262-001_sp10/slides/session10/JavaSockets.pdf
What is the difference between a port and a socket?
Since everything is sent via packets, it must take turns at some level.
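To make that concrete, the kind of per-port relay process the question describes might look roughly like this; it's not clear whether the original setup is TCP or UDP, so this is a UDP guess with made-up port numbers:

import socket

LISTEN_PORT = 9000    # made-up port this relay process listens on
FORWARD_PORT = 9100   # made-up localhost port the data is relayed to

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("0.0.0.0", LISTEN_PORT))

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    # Each datagram is read off the listening port and relayed to localhost.
    data, addr = recv_sock.recvfrom(4096)
    send_sock.sendto(data, ("127.0.0.1", FORWARD_PORT))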

Python Socket/Interrupt for beginner

I'm building a network music player with my Raspberry Pi and I'm trying to come up with a scheme that will let me send a "command" to my Pi over the network to make it do various things (such as transport control).
This is what I'm thinking on the receiver (in sort-of pseudo-code):
while True:
    while nothingIsRecvD:
        do_stuff()
    do_something_with(theDataRecvDfromSocket)
Is there some basic code for beginners I can look at?
You'll need to use the socket module and the select module.
To set up the socket, you'll need to
Use socket.socket to create a socket. You'll probably want to use the AF_INET address family. For TCP, use SOCK_STREAM; for UDP, use SOCK_DGRAM.
bind the socket to the interface and port you want to listen on.
For TCP, call listen on the socket. 5 is the typical backlog value used.
If you're using TCP, you've just created a listening socket. In order to actually receive data, you'll need to accept a connection using accept. With a connected socket you can recv or send data.
UDP is similar, except accepting is not necessary and you'll use recvfrom and sendto rather than recv and send.
These methods block, however, and if I understand you correctly, you don't want that. select.select lets you wait for an event to occur on any of a given set of sockets. You can also provide a zero timeout if you want to just check if there is some activity. Once it has detected activity, you can usually perform the appropriate action once without blocking.
Once you're done with sockets, be polite and close them after shutting down any connected sockets.
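Putting those steps together, a rough TCP sketch might look like the following; the port, buffer size, and the do_stuff()/handle_command() placeholders are all arbitrary:

import select
import socket

def do_stuff():
    pass  # placeholder for the player's normal work

def handle_command(data):
    print("got command:", data)  # placeholder command handler

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 5000))   # arbitrary example port
listener.listen(5)

connections = []
while True:
    do_stuff()
    # Zero timeout: just check whether the listener or any connection is readable.
    readable, _, _ = select.select([listener] + connections, [], [], 0)
    for sock in readable:
        if sock is listener:
            conn, addr = listener.accept()
            connections.append(conn)
        else:
            data = sock.recv(4096)
            if data:
                handle_command(data)
            else:
                # Empty read means the peer closed the connection; clean up.
                sock.close()
                connections.remove(sock)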
You could consider using sockets to communicate between the music player and the server. The recv() call (typically used with TCP sockets) and the recvfrom() call (typically used with UDP sockets) are blocking -- so they should provide a nice blocking context for your nothingIsRecvD case and would allow you to get rid of the "while True" loop. You can find examples in the Python Library Reference: http://docs.python.org/release/2.5.2/lib/socket-example.html

python sockets and a serial to IP device

Using a Lantronix UDS-1100 serial to IP converter. The goal is to write a small proof of concept piece in Python to capture serial data output by this device over IP.
I've done a couple of test projects using sockets in Python, but they were all between Python processes (Python-to-Python): listen() on one end, and connect(), sendall(), etc. on the other.
I think I can use sockets for this project, but before I invest a bunch of time into it, wanted to make sure it is a viable solution.
Can Python sockets be used to capture IP traffic when the traffic originates from a non-Python source? I have full control over the IP and port that the device sends the serial data to, but there will be no Python connect() initiated by the client. I can prepend the serial data with some connect() string if needed.
If sockets won't work, please recommend another solution...guessing it will be REST or similar.
Of course. TCP/IP is supposed to be cross-platform and cross-language, so in theory you should be able to communicate with every kind of device as long as you manage to process and send the expected protocol.
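As a sketch of what that could look like with the standard socket module, assuming the UDS-1100 is configured to open a TCP connection to the listening host (the port below is just an example):

import socket

# The device is assumed to be configured to connect to this host/port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 10001))   # example port only
server.listen(1)

conn, addr = server.accept()       # the Lantronix box initiates the connection
print("connection from", addr)
while True:
    chunk = conn.recv(4096)        # raw serial bytes as forwarded by the device
    if not chunk:
        break                      # device closed the connection
    print(repr(chunk))
conn.close()
server.close()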

Python Socket Programming

I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a single server. The server should first send data to all the clients specifying a sending_interval, and then each client should keep sending its data with a time gap of that interval (as specified by the server). Please help me out: how can I do this with a Python socket program? (i.e. I want multiple-client to single-server connectivity, with each client sending data at the interval specified by the server.) I'd be grateful if anyone can help. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at an interval specified via that default/config channel/socket.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher: if there's nothing to listen to yet, it just loops around and waits until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure-python, it does require a C compiler, but is frighteningly fast, and the pub/sub example is a cut/paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
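As a rough pyzmq sketch of that layout: it uses PUB-SUB for the configuration channel and PUSH-PULL for collecting client data, which is a slight variation on the all-PUB-SUB description above. The ports, message format, and interval are all made up.

# --- server side (publishes the sending interval, then collects client data) ---
import time
import zmq

ctx = zmq.Context()
config_pub = ctx.socket(zmq.PUB)        # broadcasts configuration to all clients
config_pub.bind("tcp://*:5556")         # port numbers are arbitrary examples
collector = ctx.socket(zmq.PULL)        # receives data pushed by the clients
collector.bind("tcp://*:5557")

time.sleep(1)                           # brief pause so subscribers are connected (slow-joiner caveat)
config_pub.send_string("interval 5")    # tell every client to send every 5 seconds

while True:
    print("got:", collector.recv_string())

# --- client side (reads the interval, then pushes data at that rate) ---
import time
import zmq

ctx = zmq.Context()
config_sub = ctx.socket(zmq.SUB)
config_sub.connect("tcp://localhost:5556")
config_sub.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to everything
pusher = ctx.socket(zmq.PUSH)
pusher.connect("tcp://localhost:5557")

interval = int(config_sub.recv_string().split()[1])
while True:
    pusher.send_string("client data")   # placeholder payload
    time.sleep(interval)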
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both of these setups:
Single client and single server
Multiple clients and a single server
See my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading means running multiple threads concurrently within a single process.
For a deeper explanation, see https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
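For reference, a minimal sketch of the multithreaded, single-server pattern (this is not taken from that repo; the port and the "interval" message are made-up examples):

import socket
import threading

def handle_client(conn, addr):
    # Each client gets its own thread; first tell it how often to send.
    conn.sendall(b"interval 5\n")        # made-up protocol: server dictates the gap
    while True:
        data = conn.recv(4096)
        if not data:
            break                        # client disconnected
        print(addr, "sent:", data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 6000))           # arbitrary example port
server.listen(5)

while True:
    conn, addr = server.accept()
    threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()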

How to detect non-graceful disconnect of Twisted on Linux?

I wrote a server based on Twisted, and I've encountered a problem: some of the clients disconnect non-gracefully, for example when the user pulls out the network cable.
After a while, the Windows client notices the disconnect (its connectionLost is called; it is also written in Twisted). But on the Linux server side, my connectionLost is never triggered, even when the server tries to write data to the client after the connection is lost. Why can't Twisted detect these non-graceful disconnections (even when writing data to the client) on Linux? How can I make Twisted detect them? Because Twisted can't detect non-graceful disconnects, I end up with lots of zombie users on my server.
---- Update ----
I thought it might be a feature of sockets on Unix-like operating systems, so what is the socket behavior on Unix-like systems for handling a situation like this?
Thanks.
Victor Lin.
You're describing the behavior of TCP connections on an unreliable network. Twisted is merely exposing this behavior: after all, when you set up a TCP connection with Twisted, it is nothing more than a TCP connection.
You're mistaken when you say that the connectionLost callback isn't invoked even if you try to send data over the lost connection. After two minutes, the underlying TCP connection will disappear and Twisted will inform you of this by calling connectionLost.
If you need to detect this condition more quickly than that, then you can implement your own timeouts using reactor.callLater.
Seconding what Jean-Paul said: if you need more fine-grained TCP connection management, just use reactor.callLater. We have exactly that implementation on a Twisted/wxPython trading platform, and it works a treat. You might also want to tweak the behaviour of the ReconnectingClientFactory in order to achieve the results I understand you're looking for.
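As a rough sketch of the reactor.callLater approach: the timeout and port are arbitrary, and treating any received data as a heartbeat is an assumption about your protocol.

from twisted.internet import protocol, reactor

class TimeoutProtocol(protocol.Protocol):
    TIMEOUT = 30  # seconds without traffic before we assume the peer is gone

    def connectionMade(self):
        self._timeout = reactor.callLater(self.TIMEOUT, self._timedOut)

    def dataReceived(self, data):
        # Assumption: any incoming data counts as proof the client is still alive.
        self._timeout.reset(self.TIMEOUT)

    def _timedOut(self):
        # No traffic within TIMEOUT seconds: drop the (possibly zombie) connection.
        self.transport.loseConnection()

    def connectionLost(self, reason):
        if self._timeout.active():
            self._timeout.cancel()

factory = protocol.Factory()
factory.protocol = TimeoutProtocol
reactor.listenTCP(8000, factory)   # arbitrary example port
reactor.run()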
