How can I create a UDP server in Python where it is possible to know when a client has disconnected? The server needs to be fast because I will use it in an MMORPG. I have never written a UDP server before, so I'm having a little trouble.
There is no such thing as a connection in UDP. Because of this, it becomes your responsibility to detect whether the client has disconnected. Generally speaking, your protocol should give the client a way to notify the server that it is ending its session. Additionally, you will need to implement some kind of timeout so that after a certain period of inactivity, the session is ended.
Note that UDP is more difficult to work with than TCP because packets are not always guaranteed to be delivered. Depending on what you are doing, you might need to implement some type of check to ensure that packets that are not delivered are sent again. TCP does this for you, but it also has the side effect of making the protocol slower.
This answer provides some more considerations: https://stackoverflow.com/a/57489/4250606
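As a rough sketch of the timeout approach described above (the port, the 30-second timeout, and the b"BYE" goodbye message are placeholder assumptions, not part of any real protocol):

# Minimal sketch of a UDP server that expires silent clients.
# The port, 30-second timeout, and b"BYE" message are assumptions.
import socket
import time

TIMEOUT = 30.0  # seconds of silence before a client is considered gone

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))
sock.settimeout(1.0)  # wake up periodically to check for stale clients

last_seen = {}  # client address -> timestamp of last datagram

while True:
    try:
        data, addr = sock.recvfrom(4096)
        if data == b"BYE":            # explicit "I'm leaving" message
            last_seen.pop(addr, None)
        else:
            last_seen[addr] = time.time()
    except socket.timeout:
        pass
    # drop clients that have been silent too long
    now = time.time()
    for addr in [a for a, t in last_seen.items() if now - t > TIMEOUT]:
        del last_seen[addr]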
UDP is not connection-based. Since no connection exists when using UDP, there is nothing to disconnect. Since there is nothing to disconnect, you can't ever know when something disconnects. It never will because it was never connected in the first place.
I have an application, foo which takes in data, does stuff to it, and then publishes the new treated data over AMQ for another downstream application to grab. Until this point, foo has always gotten its data by connecting to another AMQ server which another script is publishing packetized data to (a lot of handwaving here, but the specifics don't really matter).
Recently a change has been made, and foo needs to be able to grab its data from a UDP socket. Is AMQ able to connect to this socket and receive/listen to the data being transmitted over it? From my understanding, AMQ uses TCP to establish connection to the client, and some initial research points me to this UDP Transport documentation from Apache, but not much else.
Alternatively, I could develop a rough UDP socket listener in Python, and then publish those messages to AMQ for foo to grab, but it would be optimal to have it all included in foo itself.
Not necessarily looking for an exhaustive solution here; quick and dirty would be enough to get me started.
Thanks!
ActiveMQ itself is a broker and therefore doesn't connect to sockets and listen for messages. It is the job of a client to connect to the broker and send and/or receive messages.
The UDP transport documentation is just theoretical as far as I know. It is technically possible to use UDP as the base of a traditional messaging protocol, but I've never actually seen it done, since UDP is unreliable. The documentation even says, "Note that by default UDP is not reliable; datagrams can be lost so you should add a reliability layer to ensure the JMS contract can be implemented on a non-reliable transport." Adding a "reliability layer" is impractical when TCP can simply be used instead. All of the protocols which ActiveMQ supports (i.e. AMQP, STOMP, MQTT, OpenWire) fundamentally require a reliable network transport.
I definitely think you'll need some kind of intermediary process to read the data from the UDP socket and push it to the broker.
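If you do end up writing that intermediary, a rough sketch might look like the following. It assumes the stomp.py client library, a broker listening for STOMP on port 61613, and placeholder credentials, queue name, and UDP port; adjust all of those to your setup.

# Rough sketch: read datagrams from a UDP socket and forward them to
# ActiveMQ over STOMP. stomp.py, port 61613, the credentials, and the
# queue name are all assumptions, not part of the original question.
import socket
import stomp

UDP_PORT = 5005              # assumed port the data arrives on
QUEUE = "/queue/foo.input"   # assumed destination for foo to consume

conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", UDP_PORT))

while True:
    data, _addr = sock.recvfrom(65535)
    conn.send(destination=QUEUE, body=data.decode("utf-8", errors="replace"))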
Key points:
I need to send roughly 100 float numbers every 1 to 30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine is listening for them and passing them to an HTTP server (nginx), a Telegram bot, and another program that sends alert emails.
How would you do this and why?
Please be accurate. This is the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details and enlighten me!
A small portion (a few lines) of the code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol, i.e. whether you will keep a persistent connection to your server or connect each time new data is ready.
Then, will you use HTTP POST, WebSockets, or ordinary sockets? Will you rely exclusively on nginx, or will your data catcher be a separate service?
This would be the most secure approach if other people will also be connecting to nginx to view sites, etc.
Write or use another server running on another port, for example another nginx process just for this. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then on the client side, bundle all the data into a packet every x seconds (pickle.dumps(), json, or something similar), connect to your port with your credentials, and send the packet.
A Python script can be waiting for it there.
Or you can write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you want or need one. I don't think that is necessary, though, and coding the recovery from broken connections can become bulky.
Nothing fancy is needed: just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You could even rely solely on pre-generated keys for security and then send only the data.
Do not worry about speed. Sockets are handled by the OS, and if you are on a Unix-like system you can connect as many times as you want, in as short an interval as you need. Nothing short of a DoS attack will impact it much.
On Windows, it's better to use an existing server, because Windows sometimes does not release a socket in time, so you will be forced to wait or resort to some hackery to avoid this unfortunate behaviour (non-blocking sockets, SO_REUSEADDR, and then some flow control will be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python or modify and run one of the examples from the internet. That's just me, though.
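For the sending side of the SSL-socket variant, a minimal sketch could look like this. The host, port, certificate path, and JSON payload layout are all assumptions for illustration, not a prescribed design.

# Minimal sketch of the sending side: bundle the readings as JSON and
# push them over a TLS-wrapped socket. Host, port, cert path, and the
# payload layout are assumptions.
import json
import socket
import ssl
import time

HOST, PORT = "receiver.example.com", 8443
context = ssl.create_default_context(cafile="server_cert.pem")  # pre-built cert

def send_readings(values):
    payload = json.dumps({"ts": time.time(), "values": values}).encode("utf-8")
    with socket.create_connection((HOST, PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            tls.sendall(payload)

send_readings([1.23, 4.56, 7.89])  # ~100 floats in practice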
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, then multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
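A sketch of that approach (the receiver address, port, and the choice of big-endian doubles are assumptions):

# Sketch of the struct.pack + UDP approach. The address, port, and the
# "!%dd" big-endian double format are assumptions.
import socket
import struct

TARGET = ("192.168.1.50", 5005)  # assumed receiver address

def publish(values):
    # pack N doubles in network byte order; the receiver unpacks the same way
    message = struct.pack("!%dd" % len(values), *values)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(message, TARGET)
    sock.close()

# receiver side: values = struct.unpack("!%dd" % (len(data) // 8), data)
publish([20.5, 21.1, 19.8])  # ~100 floats in practice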
I'm implementing client-server communication using UDP that's used for FTP-style file transfer. First off, you don't need to tell me that UDP is unreliable, I know. My approach is: the client asks for a file, the server blasts the client with UDP packets with sequence numbers, then asks "what did you miss?" and resends those. On a local network, packet loss is < 1%. I'm pretty new to socket programming, so I'm not familiar with all the socket options (and most examples found on Google are for TCP).
My problem is with how my client receives this data:
PACKET_SIZE = 9216
mysocket.sendto('GO!', server_addr)
while True:
    resp = mysocket.recv(PACKET_SIZE)
    worker_thread.enqueue_packet(resp)
But by the time it gets back up to .recv(), it has missed a few UDP packets (which I've confirmed are being sent, using Wireshark). I can fix this by making the server send slightly slower (actually, including logging statements adds enough of a delay to make everything work).
How can I make sure that socket.recv doesn't miss anything in the time it takes to process a packet? I've tried pushing the data out to a separate thread that pushes it into a queue, but it's still not enough.
Any ideas? select, recv_into, setblocking?
While you already know that UDP is not reliable, you may have missed the other advantages of TCP. Relevant for you is that TCP has flow control and automatically scales down if the receiver is unable to cope with the sender's speed (e.g. because of packet loss). So for normal connections TCP should be preferred for data transfer. For high-latency connections (e.g. a satellite link) it behaves too badly in the default configuration, so some people design their own custom transfer protocols (mostly on top of UDP), while others just tune the existing TCP stack.
I don't know why you use UDP, but if you want to continue using it you should add some kind of back channel to the sender to inform it of current packet loss, so that it can scale down. Maybe you should have a look at RTCP, which accompanies RTP (used for VoIP etc.).
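As a very rough illustration of such a back channel (the 4-byte sequence header, ports, and NACK message format here are all invented for the example, not your protocol), the receiver can report the gaps it sees so the sender can slow down or retransmit:

# Rough sketch of a receiver-side back channel: detect gaps in the
# sequence numbers and report them to the sender. The 4-byte header,
# ports, and NACK format are assumptions.
import socket
import struct

PACKET_SIZE = 9216
data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
data_sock.bind(("0.0.0.0", 6000))

feedback_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender_feedback_addr = ("192.168.1.10", 6001)  # assumed sender address

expected = 0
while True:
    packet, _ = data_sock.recvfrom(PACKET_SIZE)
    (seq,) = struct.unpack("!I", packet[:4])   # assumed 4-byte sequence header
    if seq > expected:
        # report the missing range so the sender can resend or slow down
        feedback_sock.sendto(struct.pack("!II", expected, seq), sender_feedback_addr)
    expected = max(expected, seq + 1)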
I'm building a network music player with my Raspberry Pi and I'm trying to come up with a scheme that will allow me to send a "command" to my Pi that will allow it to do various things over the network (such as transport control).
This is what I'm thinking on the receiver (in sort-of pseudo-code):
while True:
    while nothingIsRecvD:
        do_stuff()
    do_something_with(theDataRecvDfromSocket)
Is there some basic code for beginners I can look at?
You'll need to use the socket module and the select module.
To set up the socket, you'll need to
Use socket.socket to create a socket. You'll probably want to use the AF_INET address family. For TCP, use SOCK_STREAM; for UDP, use SOCK_DGRAM.
bind the socket to the interface and port you want to listen on.
For TCP, call listen on the socket. 5 is the typical backlog value used.
If you're using TCP, you've just created a listening socket. In order to actually receive data, you'll need to accept a connection using accept. With a connected socket you can recv or send data.
UDP is similar, except accepting is not necessary and you'll use recvfrom and sendto rather than recv and send.
These methods block, however, and if I understand you correctly, you don't want that. select.select lets you wait for an event to occur on any of a given set of sockets. You can also provide a zero timeout if you want to just check if there is some activity. Once it has detected activity, you can usually perform the appropriate action once without blocking.
Once you're done with sockets, be polite and close them after shutting down any connected sockets.
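Putting those steps together, a minimal non-blocking check might look like this. The port number and the choice of UDP are assumptions; the same select pattern works with a listening TCP socket.

# Minimal sketch of the steps above with a UDP socket and select.
# The port and the do_stuff() placeholder are assumptions.
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5555))

def do_stuff():
    pass  # whatever the player should be doing between commands

try:
    while True:
        # short timeout: just check for activity instead of blocking
        readable, _, _ = select.select([sock], [], [], 0.1)
        if sock in readable:
            command, addr = sock.recvfrom(1024)
            print("got command", command, "from", addr)
        else:
            do_stuff()
finally:
    sock.close()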
You could consider using sockets to communicate between the music player and the server. The recv() call (typically used with TCP sockets) and the recvfrom() call (typically used with UDP sockets) are blocking, so they provide a natural blocking context for your nothingIsRecvD case and would let you get rid of the "while True" loop. You can find examples in the Python Library Reference: http://docs.python.org/release/2.5.2/lib/socket-example.html
I wrote a server based on Twisted, and I've run into a problem: some clients disconnect non-gracefully, for example when the user pulls out the network cable.
After a while, the client on Windows detects the disconnection (its connectionLost is called; the client is also written with Twisted). But on the Linux server side, my connectionLost is never triggered, even when the server tries to write data to the client after the connection is lost. Why can't Twisted detect those non-graceful disconnections on Linux (even when writing data to the client)? How can I make Twisted detect non-graceful disconnections? Because Twisted can't detect them, I have lots of zombie users on my server.
---- Update ----
I thought it might be a feature of sockets on Unix-like OSes, so what is the socket behavior on Unix-like systems for handling situations like this?
Thanks.
Victor Lin.
You're describing the behavior of TCP connections on an unreliable network. Twisted is merely exposing this behavior: after all, when you set up a TCP connection with Twisted, it is nothing more than a TCP connection.
You're mistaken when you say that the connectionLost callback isn't invoked even if you try to send data over it. After two minutes, the underlying TCP connection will disappear and Twisted will inform you of this by calling connectionLost.
If you need to detect this condition more quickly than that, then you can implement your own timeouts using reactor.callLater.
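A hedged sketch of that callLater idea (the 30-second timeout and the protocol name are placeholders; this is one possible arrangement, not Twisted's prescribed one):

# Sketch of an application-level timeout with reactor.callLater.
# The 30-second value and the protocol name are placeholder assumptions.
from twisted.internet import reactor
from twisted.internet.protocol import Protocol

TIMEOUT = 30  # seconds of silence before we give up on the peer

class TimeoutProtocol(Protocol):
    def connectionMade(self):
        self._timeout = reactor.callLater(TIMEOUT, self._onTimeout)

    def dataReceived(self, data):
        self._timeout.reset(TIMEOUT)   # any traffic proves the peer is alive
        # ... handle data ...

    def _onTimeout(self):
        self.transport.loseConnection()  # eventually triggers connectionLost

    def connectionLost(self, reason):
        if self._timeout.active():
            self._timeout.cancel()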
Seconding what Jean-Paul said, if you need more fine-grained TCP connection management, just use reactor.callLater. We have exactly that implementation on a Twisted/wxPython trading platform, and it works a treat. You might also want to tweak the behaviour of the ReconnectingClientFactory in order to achieve the results I understand you're looking for.
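For reference, tweaking the ReconnectingClientFactory usually just means adjusting its retry attributes, roughly like this (the delay values and the protocol class are placeholder assumptions):

# Sketch of tuning ReconnectingClientFactory retry behaviour on the
# client side. The delays and the protocol class are assumptions.
from twisted.internet.protocol import Protocol, ReconnectingClientFactory

class MyProtocol(Protocol):
    def dataReceived(self, data):
        pass  # handle incoming data

class MyClientFactory(ReconnectingClientFactory):
    protocol = MyProtocol
    initialDelay = 1.0   # start retrying quickly
    maxDelay = 15.0      # but never back off longer than this
    factor = 2.0         # backoff multiplier between attempts

    def buildProtocol(self, addr):
        self.resetDelay()  # connection succeeded: reset the backoff
        return ReconnectingClientFactory.buildProtocol(self, addr)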