Is this good behaviour for a program that uses both TCP and UDP? - python

I'm developing a client-server game in Python, and each quantum, the server has to send the state of the game to the clients.
I developed it with both UDP and TCP connections. UDP ensures fast delivery of the game states, and TCP is used for the reliability part.
Is this a good way of doing it?
So each quantum, the server sends data like this:
while playing:
    data = computeGameData()
    sendNewPlayUDP(data)
    sendNewPlayTCP(data)
    time.sleep(sleeptime)
I tested it, and it seems to work well, but I wonder whether the thread can block because TCP is struggling. Maybe there is a better way of doing this.

According to
http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/
you should not use TCP at all. The article recommends using UDP and adding extra logic for the packets you absolutely want delivered and acknowledged (a minimal sketch of that idea follows below). It also states that TCP packets may interfere with UDP packets, increasing the UDP packet loss rate.
You may also have a look at:
https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
Losing packets can be tolerated in many cases. It looks like overkill to send the same data on both the TCP and UDP channels.
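For illustration, here is a minimal sketch of that ack-on-top-of-UDP idea. The peer address, the 4-byte sequence header, and the helper names are assumptions for the example, not anything prescribed by the article:

import socket

PEER = ("127.0.0.1", 9999)        # assumed client address
ACK_TIMEOUT = 0.05                # seconds to wait for an ack before resending

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(ACK_TIMEOUT)

def send_unreliable(payload):
    # fire-and-forget for frequent state updates; losses are tolerated
    sock.sendto(payload, PEER)

def send_reliable(seq, payload):
    # resend a critical packet until the peer acks its sequence number
    packet = seq.to_bytes(4, "big") + payload
    while True:
        sock.sendto(packet, PEER)
        try:
            reply, _ = sock.recvfrom(16)
            if reply == b"ACK" + seq.to_bytes(4, "big"):
                return
        except socket.timeout:
            pass                  # no ack in time: resend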

Related

Listen to UDP socket using Apache ActiveMQ?

I have an application, foo, which takes in data, does stuff to it, and then publishes the newly treated data over AMQ for another downstream application to grab. Until this point, foo has always gotten its data by connecting to another AMQ server which another script is publishing packetized data to (a lot of handwaving here, but the specifics don't really matter).
Recently a change has been made, and foo needs to be able to grab its data from a UDP socket. Is AMQ able to connect to this socket and receive/listen to the data being transmitted over it? From my understanding, AMQ uses TCP to establish a connection to the client, and some initial research points me to the UDP Transport documentation from Apache, but not much else.
Alternatively, I could develop a rough UDP socket listener in Python, and then publish those messages to AMQ for foo to grab, but it would be optimal to have it all included in foo itself.
Not necessarily looking for an exhaustive solution here; quick and dirty would be enough to get me started.
Thanks!
ActiveMQ itself is a broker and therefore doesn't connect to sockets and listen for messages. It is the job of a client to connect to the broker and send and/or receive messages.
The UDP transport documentation is just theoretical as far as I know. It is technically possible to use UDP as the base of a traditional messaging protocol, but I've never actually seen it done, since UDP is unreliable. The documentation even says, "Note that by default UDP is not reliable; datagrams can be lost so you should add a reliability layer to ensure the JMS contract can be implemented on a non-reliable transport." Adding a "reliability layer" is impractical when TCP can simply be used instead. All of the protocols which ActiveMQ supports (i.e. AMQP, STOMP, MQTT, OpenWire) fundamentally require a reliable network transport.
I definitely think you'll need some kind of intermediary process to read the data from the UDP socket and push it to the broker.
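A rough sketch of such an intermediary, assuming the third-party stomp.py client library and made-up host, port, and queue names; it reads datagrams off a UDP socket and forwards each one to the broker over STOMP:

import socket
import stomp                              # pip install stomp.py (assumed client)

UDP_ADDR = ("0.0.0.0", 5005)              # assumed UDP listen address
BROKER = [("localhost", 61613)]           # assumed ActiveMQ STOMP endpoint
QUEUE = "/queue/foo.data"                 # hypothetical destination for foo

conn = stomp.Connection(BROKER)
conn.connect("admin", "admin", wait=True) # assumed broker credentials

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(UDP_ADDR)

while True:
    datagram, _ = sock.recvfrom(65535)    # one UDP datagram becomes one message
    conn.send(destination=QUEUE, body=datagram.decode("utf-8", "replace"))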

Python UDP socket misses packets

I'm implementing client-server communication using UDP for a file transfer (FTP-like) protocol. First off, you don't need to tell me that UDP is unreliable; I know. My approach is: the client asks for a file, the server blasts the client with UDP packets carrying sequence numbers, then asks "what did you miss?" and resends those. On a local network, packet loss is < 1%. I'm pretty new to socket programming, so I'm not familiar with all the socket options (most examples found on Google are for TCP).
My problem is with how my client receives this data.
PACKET_SIZE = 9216
mysocket.sendto('GO!', server_addr)
while True:
    resp = mysocket.recv(PACKET_SIZE)
    worker_thread.enqueue_packet(resp)
But by the time it gets back up to .recv(), it has missed a few UDP packets (which I've confirmed are being sent, using Wireshark). I can fix this by making the server send slightly slower (actually, including logging statements adds enough of a delay to make everything work).
How can I make sure that socket.recv doesn't miss anything in the time it takes to process a packet? I've tried pushing the data out to a separate thread that pushes it into a queue, but it's still not enough.
Any ideas? select, recv_into, setblocking?
While you already know that UDP is not reliable, you may have missed the other advantages of TCP. Relevant for you is that TCP has flow control and automatically scales down if the receiver is unable to cope with the sender's speed (e.g. on packet loss). So for normal connections, TCP should be preferred for data transfer. For high-latency connections (e.g. a satellite link) it behaves too badly in the default configuration, so some people design their own custom transfer protocols (mostly on top of UDP), while others just tune the existing TCP stack.
I don't know why you use UDP, but if you want to continue using it, you should add some kind of back channel to the sender to inform it of current packet loss, so that it can scale down. Maybe you should have a look at RTCP, which accompanies RTP (used for VoIP etc.).
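As a minimal sketch of such a back channel (the 4-byte sequence header and the reporting interval are assumptions for the example): the receiver counts gaps in the sequence numbers and periodically reports its loss count back over the same socket, so the sender can throttle itself.

import socket

REPORT_EVERY = 100                        # report after every N packets (tune as needed)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 6000))              # assumed receiver address

expected = 0
lost = 0
received = 0
while True:
    packet, sender = sock.recvfrom(9216)
    seq = int.from_bytes(packet[:4], "big")   # 4-byte sequence header
    if seq > expected:
        lost += seq - expected                # gap: those packets never arrived
    expected = seq + 1                        # reordering/duplicates ignored for brevity
    received += 1
    if received % REPORT_EVERY == 0:
        # back channel: tell the sender how much was lost, so it can slow down
        sock.sendto(b"LOSS" + lost.to_bytes(4, "big"), sender)
        lost = 0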

TCP hole punching works using Java sockets but not with Python

I read the paper on TCP hole punching available here.
To do this, one has to bind to the same local port both the socket used for making the outgoing TCP connection to the remote host and the socket the local host uses to listen for incoming connections. I have been able to do this in Java but cannot in Python, even when the SO_REUSEADDR flag is set on the sockets concerned. Can someone explain to me why? Is it because Python is inherently single-threaded?
As far as I have tested and studied it, TCP hole punching is not a viable technique that will work in every situation.
First, what TCP hole punching does is not well supported by NATs, whose behavior here is unpredictable.
In summary, it relies on sending a TCP SYN packet and receiving a TCP SYN packet back (where in a normal conversation you would respond with SYN+ACK), so that the NAT opens a connection between the two hosts. Some NATs may open this connection while others don't.
The best way I know to accomplish NAT traversal is to use UDP. Since UDP is not connection oriented, you can start sending and receiving packets, and the NAT will treat one packet as a reply to the other.
See
UDP Hole Punching
Also, to make UDP as reliable as TCP you can use an implementation of TCP over UDP.
See UDT
I am sorry I didn't answer your question, but why it worked in Java and not in Python is hard to know; it may have something to do with the virtual machine's implementation, the underlying system calls, or even the NAT you are using.
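For what it's worth, here is a minimal sketch of the UDP hole-punching idea described above. It assumes both peers have already learned each other's public address and port from some rendezvous server (the endpoint below is a made-up example); both sides fire datagrams at each other so each NAT sees the incoming packet as a reply:

import socket
import time

LOCAL_PORT = 4000                     # assumed local port, the one reported to the rendezvous server
PEER = ("203.0.113.7", 4000)          # hypothetical public endpoint of the other peer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

for _ in range(10):                   # keep punching until something comes back
    sock.sendto(b"punch", PEER)       # outbound packet opens a mapping in our NAT
    try:
        data, addr = sock.recvfrom(1024)
        print("hole punched, got", data, "from", addr)
        break
    except socket.timeout:
        time.sleep(0.5)               # the peer's packet may not be through yet; retry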

How to test server behavior under network loss at every possible packet

I'm working with mobile, so I expect network loss to be common. I'm doing payments, so each request matters.
I would like to be able to test my server to see precisely how it will behave with client network loss at different points in the request cycle -- specifically between any given packet send/receive during the entire network communication.
I suspect that the server will behave slightly differently if the communication is lost while sending the response vs. while waiting for a FIN-ACK, and I want to know which disconnection timings I can distinguish.
I tried simulating an HTTP request using Scapy, stopping communication between each TCP packet. (I.e.: first send SYN, then disappear; then send SYN, receive SYN-ACK, then disappear; then send SYN, receive SYN-ACK, send ACK, then disappear; etc.) However, I quickly got bogged down in the details of trying to reproduce a functional TCP stack.
Is there a good existing tool to automate/enable this kind of testing?
Unless your application is actually responding to and generating its own IP packets (which would be incredibly silly), you probably don't need to do testing at that layer. Simply testing at the TCP layer (e.g. connect(), send(), recv(), shutdown()) will probably be sufficient, as those events are the only ones your server will be aware of.
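A rough sketch of that kind of TCP-level test, assuming a hypothetical server under test at localhost:8080 and a canned request; each run abandons the connection at a different stage of the request cycle, using SO_LINGER to force an abortive close (RST) instead of a clean FIN:

import socket
import struct

HOST, PORT = "localhost", 8080        # assumed server under test
REQUEST = b"POST /pay HTTP/1.1\r\nHost: localhost\r\nContent-Length: 2\r\n\r\nok"

def run(stage):
    # open a connection and abandon it at the given stage of the request cycle
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    if stage >= 1:
        s.send(REQUEST[:20])          # partial request, then vanish
    if stage >= 2:
        s.send(REQUEST[20:])          # full request sent
    if stage >= 3:
        s.recv(4096)                  # read part of the response first
    # abortive close: l_onoff=1, l_linger=0 makes close() send an RST
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    s.close()

for stage in range(4):
    run(stage)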

Problem with async ICMP ping

I'm writing a service in Python that pings domains asynchronously, so it must be able to ping many IPs at the same time. I wrote it on an epoll ioloop, but I have a problem with packet loss.
When there are many simultaneous ICMP requests, a large portion of the replies never reach my service. What may cause this situation, and how can I make my service ping many hosts at the same time without packet loss?
Thanks!
A problem you might be having is due to the fact that ICMP is layer 3 of the OSI model and does not use ports for communication. In short, ICMP isn't really designed for this. The desired behavior is still possible, but perhaps the IP stack you are using is getting in the way; if this is on a Windows system, then this is 100% your problem. I would fire up Wireshark to make sure you are actually receiving incoming packets; if you are, then I would use libpcap to track the incoming ICMP replies. If the problem is with sending, then you'll have to use raw sockets and build your own ICMP packets.
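If it does come to raw sockets, here is a minimal sketch of building an ICMP echo request by hand (requires root; the identifier, sequence number, payload, and target address are arbitrary examples):

import socket
import struct

def checksum(data):
    # standard Internet checksum over 16-bit words
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident, seq):
    # type 8 (echo request), code 0, checksum placeholder of 0
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    payload = b"ping"
    chk = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, chk, ident, seq) + payload

# raw ICMP socket: needs root / CAP_NET_RAW
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname("icmp"))
sock.sendto(echo_request(0x1234, 1), ("8.8.8.8", 0))
reply, addr = sock.recvfrom(1024)     # the reply arrives with its IP header prepended
print("reply from", addr)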
