Problem with async ICMP ping - Python

I'm writing a service in Python that pings domains asynchronously, so it must be able to ping many IPs at the same time. I wrote it on an epoll ioloop, but I have a problem with packet loss.
When there are many simultaneous ICMP requests, a large share of the replies never reach my service. What may cause this, and how can I make my service ping many hosts at the same time without packet loss?
Thanks)

A problem you might be having is that ICMP sits at layer 3 of the OSI model and does not use ports for communication; in short, ICMP isn't really designed for this. The desired behavior is still possible, but perhaps the IP stack you are using is getting in the way, and if this is on a Windows system I'm 100% sure that's your problem. I would fire up Wireshark to make sure you are actually getting incoming packets; if you are, then I would use libpcap to track the incoming ICMP replies. If the problem is on the sending side, you'll have to use raw sockets and build your own ICMP packets.
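If you end up on the raw-socket route, a minimal sketch of a hand-built echo request looks something like this (assuming Linux and root privileges; the packet layout follows RFC 792, everything else here is illustrative):

import socket
import struct

def icmp_checksum(data):
    # Standard ones'-complement sum over 16-bit words (RFC 1071).
    if len(data) % 2:
        data += b'\x00'
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def send_ping(dest_ip, ident=1, seq=1):
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.getprotobyname('icmp'))
    payload = b'ping'
    # Type 8 (echo request), code 0, zero checksum placeholder, id, seq.
    header = struct.pack('!BBHHH', 8, 0, 0, ident, seq)
    checksum = icmp_checksum(header + payload)
    header = struct.pack('!BBHHH', 8, 0, checksum, ident, seq)
    sock.sendto(header + payload, (dest_ip, 0))
    return sock

sock = send_ping('192.0.2.1')         # documentation address: replace it
reply, addr = sock.recvfrom(1024)     # a raw read includes the IP header
print('reply from', addr[0])

One detail that matters for the asynchronous case: the kernel delivers every incoming ICMP packet to every raw ICMP socket, so the identifier and sequence fields are what let you match replies to the requests you actually sent.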

Related

scapy sniffing only packets on my computer & filter for http packets needed

I'm working on a project in which I sniff HTTP packets that go through my network,
but Scapy sniffs only packets that are sent to my computer or broadcast.
I saw that there is a parameter called iface for the sniffing function:
sniff(iface= ? )
Yet I find no documentation or explanation of it online.
Can someone explain how it helps and what value to put in it when sniffing, if I want to sniff the whole network and not just my computer?
Also, I can't find a filter function for HTTP packets, so I'd appreciate it if someone could write one for me.
Here is some documentation on sniffing with Scapy. There is also some information regarding filters, but it's quite sparse.
More than likely you will be able to use something like the following to get the HTTP packets:
sniff(iface="eth0", filter="tcp and port 80")
Obviously the actual interface name will differ based on the interfaces on your machine.

Is this good behaviour for a program using both TCP and UDP?

I'm developing a client-server game in Python, and each quantum the server has to send the state of the game to the clients.
I developed it with both UDP and TCP connections: UDP provides fast delivery of the game states, and TCP is used for the reliability part.
Is this a good way of doing it?
So each quantum the server sends data like this:
while playing:
    data = computeGameData()
    sendNewPlayUDP(data)
    sendNewPlayTCP(data)
    time.sleep(sleeptime)
I tested it and it seems to work well, but I wonder whether the thread can block because of TCP struggling. Maybe there is a better way of doing this.
According to
http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/
you should not use TCP at all. This article recommends using UDP and adding extra logic for the packets you absolutely want to be received and acknowledged. It also states that TCP packets may interfere with UDP packets, increasing the UDP packet loss rate.
You may also have a look at:
https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
Losing packets can be tolerated in many cases. It looks like a bit of overkill to send the same data on both TCP and UDP channels.
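If you drop the TCP channel, the "extra logic" for must-arrive packets can be as small as sequence numbers plus resend-until-acked. A rough sketch of that idea (the class, framing and timeout are illustrative, not from any library):

import struct
import time

class ReliableSender:
    """Resend sequence-numbered UDP payloads until the peer acks them."""

    def __init__(self, sock, peer):
        self.sock = sock          # a UDP socket
        self.peer = peer
        self.seq = 0
        self.pending = {}         # seq -> (payload, time of last send)

    def send_important(self, payload):
        self.seq += 1
        self.pending[self.seq] = (payload, time.monotonic())
        self.sock.sendto(struct.pack('!I', self.seq) + payload, self.peer)

    def handle_ack(self, data):
        (seq,) = struct.unpack('!I', data[:4])
        self.pending.pop(seq, None)       # acknowledged: stop resending

    def resend_unacked(self, timeout=0.2):
        # Call this once per quantum, alongside the normal state send.
        now = time.monotonic()
        for seq, (payload, sent) in list(self.pending.items()):
            if now - sent > timeout:
                self.sock.sendto(struct.pack('!I', seq) + payload, self.peer)
                self.pending[seq] = (payload, now)

The per-quantum game state itself doesn't need this treatment, since a newer state supersedes a lost one; reserve the ack machinery for one-off events such as scores or deaths.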

Python UDP socket misses packets

I'm implementing client-server communication using UDP for file transfer. First off, you don't need to tell me that UDP is unreliable, I know. My approach is: the client asks for a file, the server blasts the client with UDP packets carrying sequence numbers, then asks "what did you miss?" and resends those. On a local network, packet loss is < 1%. I'm pretty new to socket programming, so I'm not familiar with all the socket options (most examples found on Google are for TCP).
My problem is with how my client receives this data:
PACKET_SIZE = 9216
mysocket.sendto(b'GO!', server_addr)
while True:
    resp = mysocket.recv(PACKET_SIZE)
    worker_thread.enqueue_packet(resp)
But by the time it gets back up to .recv(), it has missed a few UDP packets (which I've confirmed are being sent, using Wireshark). I can fix this by making the server send slightly more slowly (actually, adding logging statements introduces enough of a delay to make everything work).
How can I make sure that socket.recv doesn't miss anything in the time it takes to process a packet? I've tried pushing the data out to a separate thread that puts it into a queue, but it's still not enough.
Any ideas? select, recv_into, setblocking?
While you already know that UDP is not reliable, you may have missed the other advantages of TCP. Relevant for you is that TCP has flow control and automatically scales down if the receiver is unable to cope with the sender's speed (e.g. under packet loss). So for normal connections TCP should be preferred for data transfer. For high-latency connections (satellite links) it behaves too badly in the default configuration, so some people design their own custom transfer protocols (mostly over UDP), while others just tune the existing TCP stack.
I don't know why you use UDP, but if you want to continue with it you should add some kind of back channel to the sender to inform it of the current packet loss, so that it can scale down. Maybe you should have a look at RTCP, which accompanies RTP (used for VoIP etc.).
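As a rough illustration of such a back channel (the four-byte sequence header and the report interval are invented for the sketch; they are not any standard format):

import struct

PACKET_SIZE = 9216

def receive_loop(sock, sender_addr):
    # Count gaps in the sequence numbers and periodically report the
    # loss seen in the last window back to the sender, which can then
    # increase its inter-packet delay.
    expected = received = lost = 0
    while True:
        data, _ = sock.recvfrom(PACKET_SIZE)
        (seq,) = struct.unpack('!I', data[:4])
        if seq > expected:
            lost += seq - expected        # gap: packets were dropped
        expected = seq + 1
        received += 1
        if received >= 100:
            sock.sendto(struct.pack('!II', lost, received), sender_addr)
            lost = received = 0

Independently of flow control, it's also worth enlarging the socket's kernel receive buffer, e.g. sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024), so that short stalls in your processing loop don't immediately translate into drops.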

Subsuming the Linux packet processing stack

We occasionally have to debug glitchy Cisco routers that don't handle the TCP Selective Acknowledgment (SACK) options correctly. This causes our TCP sessions to die when routed through an iptables port redirection rule.
To help with the diagnosis, I've been constructing a Python-based utility to build a sequence of packets that can reproduce this error at will; the implementation uses raw sockets to perform this trick. I've got an ICMP ping working nicely, but I've run into a snag with the UDP implementation. I can construct, send and receive the packets without problem; the issue I'm seeing is that Linux doesn't like the UDP packets being sent back from the remote system and always sends an ICMP destination-unreachable packet, even though my Python script is able to receive and process the packet without any apparent problems.
My question: is it possible to subsume the Linux UDP stack to bypass these ICMP error messages when working with raw sockets?
Thanks
Are you receiving and processing the packet and only need to suppress the ICMP port-unreachable? If so, maybe just add an entry to the iptables OUTPUT chain to drop it.
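For example (you may want to scope the rule with -d to just the test peer rather than dropping every port-unreachable the box generates):

iptables -A OUTPUT -p icmp --icmp-type port-unreachable -j DROP

The kernel emits the port-unreachable because nothing is bound to the UDP port your raw socket is impersonating; another common trick is to bind an ordinary UDP socket to that port (and never read from it) so the kernel considers the port occupied.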

How to test server behavior under network loss at every possible packet

I'm working with mobile, so I expect network loss to be common. I'm doing payments, so each request matters.
I would like to be able to test my server to see precisely how it will behave with client network loss at different points in the request cycle -- specifically between any given packet send/receive during the entire network communication.
I suspect that the server will behave slightly differently if the communication is lost while sending the response vs. while waiting for a FIN-ACK, and I want to know which timings of disconnections I can distinguish.
I tried simulating an HTTP request using Scapy, stopping communication between each TCP packet (i.e. first send SYN, then disappear; then send SYN and receive SYN-ACK, then disappear; then send SYN, receive SYN-ACK and send ACK, then disappear; etc.). However, I quickly got bogged down in the details of trying to reproduce a functional TCP stack.
Is there a good existing tool to automate/enable this kind of testing?
Unless your application is actually responding to and generating its own IP packets (which would be incredibly silly), you probably don't need to test at that layer. Simply testing at the TCP layer (e.g. connect(), send(), recv(), shutdown()) will probably be sufficient, as those events are the only ones your server will be aware of.
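A sketch of driving those scenarios from the socket API (the host, port and request bytes are placeholders; note that a reset is still a signal the server can see, whereas true mid-air loss looks like silence and needs firewall rules or something like tc netem to reproduce):

import socket
import struct

REQUEST = b'POST /pay HTTP/1.1\r\nHost: example.test\r\nContent-Length: 0\r\n\r\n'

def run_scenario(server, abort_after):
    sock = socket.create_connection(server, timeout=5)
    if abort_after != 'orderly':
        # SO_LINGER with a zero timeout makes close() send a RST instead
        # of a normal FIN handshake, approximating a client that vanished.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack('ii', 1, 0))
    try:
        if abort_after == 'connect':
            return                  # vanish right after the handshake
        sock.sendall(REQUEST)
        if abort_after == 'send':
            return                  # vanish before reading the response
        sock.recv(4096)             # read (part of) the response
    finally:
        sock.close()

for step in ('connect', 'send', 'recv', 'orderly'):
    run_scenario(('localhost', 8080), step)

Comparing the server's behaviour across the runs tells you which disconnect timings it can actually distinguish.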
