I'm trying to write a Scapy script that computes an average ping time, so I need the time elapsed between the ICMP echo request being sent and the echo reply being received. For now, I have this:
#! /usr/bin/env python
from scapy.all import *
from time import *

def QoS_ping(host, count=3):
    packet = Ether()/IP(dst=host)/ICMP()
    t = 0.0
    for x in range(count):
        t1 = time()
        ans = srp(packet, iface="eth0", verbose=0)
        t2 = time()
        t += t2 - t1
    return (t/count) * 1000
The problem is that timing with the time() function doesn't give good results. For example, I measure 134 ms for one domain, while the system ping command reports an average of about 30 ms for the same domain.
My question is: is there a way to get the exact time elapsed between the packet being sent and the reply being received with Scapy?
I don't want to use popen() or other system calls, because I need Scapy for future packet handling.
Is there a way to get the exact time elapsed between the sent packet and the received packet with Scapy?
You can use pak.time and pak.sent_time
I modified your script to use them...
import statistics
import os
from scapy.all import Ether, IP, ICMP, srp

if os.geteuid() > 0:
    raise OSError("This script must run as root")

ping_rtt_list = list()

def ping_addr(host, count=3):
    packet = Ether()/IP(dst=host)/ICMP()
    t = 0.0
    for x in range(count):
        x += 1  # Start with x = 1 (not zero)
        ans, unans = srp(packet, iface="eth0", filter='icmp', verbose=0)
        rx = ans[0][1]
        tx = ans[0][0]
        delta = rx.time - tx.sent_time
        print("ping #{0} rtt: {1} second".format(x, round(delta, 6)))
        ping_rtt_list.append(round(delta, 6))
    return ping_rtt_list

if __name__ == "__main__":
    ping_rtt_list = ping_addr('172.16.15.1')
    rtt_avg = round(statistics.mean(ping_rtt_list), 6)
    print("Avg ping rtt (seconds):", rtt_avg)
An example run:
$ sudo /opt/virtual_env/py37_test/bin/python ./ping_w_scapy.py
ping #1 rtt: 0.002019 second
ping #2 rtt: 0.002347 second
ping #3 rtt: 0.001807 second
Avg ping rtt (seconds): 0.002058
BTW, using unbuffered python (python -u to start it) increases the timing accuracy, since Python isn't waiting for the output buffers to flush. Using your script above, it changed my results from being off by 0.4 ms to being off by roughly 0.1 ms.
Mike, just a small fix in order to get the average time, change:
print "Average %0.3f" % float(match.group(1))
to:
print "Average %0.3f" % float(match.group(2))
since match.group(1) will get the min time and not the avg, as mentioned.
I am analysing a pcap file using Python and Scapy.
Currently, I have it counting the number of packets.
I would like to count the number of SYN and ACK packets, is there a way to do this?
My main piece of code thus far is
for (pkt_data, pkt_metadata,) in RawPcapReader(file_name):
    count += 1
The code is the following:
import scapy.all as scapy
from scapy.layers.inet import TCP

pkt_count = 0
pkt_tcp_ack_count = 0
pkt_tcp_syn_count = 0

for pkt in scapy.PcapReader(file_name):
    pkt_count += 1
    if TCP in pkt:
        if "A" in pkt[TCP].flags:
            pkt_tcp_ack_count += 1
        if "S" in pkt[TCP].flags:
            pkt_tcp_syn_count += 1

print("pkt_count: %d" % pkt_count)
print("pkt_tcp_ack_count: %d" % pkt_tcp_ack_count)
print("pkt_tcp_syn_count: %d" % pkt_tcp_syn_count)
Now, a bit of context.
Scapy decodes all the layers, so you can simply query for their presence in the packet.
For a given packet you can run:
pkt.show()
which shows you how the packet has been decoded by Scapy.
I'm trying to write a program to test data transfer speeds for various-sized packets in parallel. I noticed something odd, though: according to my program, the size of the packet seemed to have no effect on transfer time, whereas the Unix ping binary would time out on some of the packet sizes I'm using. I was sending four packets containing the string 'testquest' and one that was just 2000 bytes set to 0. However, when I printed the results, they all contained 'testquest' (and were far shorter than 2000 bytes). The only thing I can conclude is that these sockets are somehow all receiving the same packet, which would explain why they all had the same rtt.
I made this MCVE to illustrate the issue (you can ignore the 'checksum' function, it's included for completeness but I know from experience that it works):
#!/usr/bin/env python3
import socket
import struct
import time
from multiprocessing.pool import ThreadPool as Pool
from sys import argv, byteorder

def calculate_checksum(pkt):
    """
    Implementation of the "Internet Checksum" specified in RFC 1071 (https://tools.ietf.org/html/rfc1071)
    Ideally this would act on the string as a series of 16-bit ints (host
    packed), but this works.
    Network data is big-endian, hosts are typically little-endian,
    which makes this much more tedious than it needs to be.
    """
    countTo = len(pkt) // 2 * 2
    total, count = 0, 0

    # Handle bytes in pairs (decoding as short ints)
    loByte, hiByte = 0, 0
    while count < countTo:
        if (byteorder == "little"):
            loByte = pkt[count]
            hiByte = pkt[count + 1]
        else:
            loByte = pkt[count + 1]
            hiByte = pkt[count]
        total += hiByte * 256 + loByte
        count += 2

    # Handle last byte if applicable (odd-number of bytes)
    # Endianness should be irrelevant in this case
    if countTo < len(pkt):  # Check for odd length
        total += pkt[len(pkt) - 1]

    total &= 0xffffffff  # Truncate sum to 32 bits (a variance from ping.c, which
                         # uses signed ints, but overflow is unlikely in ping)
    total = (total >> 16) + (total & 0xffff)  # Add high 16 bits to low 16 bits
    total += (total >> 16)                    # Add carry from above (if any)
    return socket.htons((~total) & 0xffff)

def ping(args):
    sock, payload = args[0], args[1]
    header = struct.pack("!BBH", 8, 0, 0)
    checksum = calculate_checksum(header + payload)
    header = struct.pack("!BBH", 8, 0, checksum)
    timestamp = time.time()
    sock.send(header + payload)
    try:
        response = sock.recv(20 + len(payload))
    except socket.timeout:
        return 0
    return (len(response), (time.time() - timestamp) * 1000)

host = argv[1]  # A host that doesn't respond to ping packets > 1500B

# 1 is ICMP protocol number
sockets = [socket.socket(socket.AF_INET, socket.SOCK_RAW, proto=1) for i in range(12)]
for i, sock in enumerate(sockets):
    sock.settimeout(0.1)
    sock.bind(("0.0.0.0", i))
    sock.connect((host, 1))  # Port number should never matter for ICMP

args = [(sockets[i], bytes(2**i)) for i in range(12)]

for arg in args:
    print(ping(arg))
    arg[0].close()
This actually shows me something more troubling: the rtt seems to decrease as the packet size increases! Calling this program (as root, to get socket permissions) outputs:
0
0
(24, 15.784025192260742)
(28, 0.04601478576660156)
(28, 0.025033950805664062)
(28, 0.033855438232421875)
(28, 0.03528594970703125)
(28, 0.04887580871582031)
(28, 0.05316734313964844)
(28, 0.03790855407714844)
(28, 0.0209808349609375)
(28, 0.024080276489257812)
but now notice what happens when I try to send a packet of size 2048 using ping:
user#mycomputer ~/src/connvitals $ time ping -c1 -s2048 $box
PING <hostname redacted> (<IP address redacted>): 2048 data bytes
--- <hostname redacted> ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
real 0m11.018s
user 0m0.005s
sys 0m0.008s
Not only is the packet dropped, but it takes 11 seconds to do so! So why - if my timeout is set to 100ms - is this packet getting a "successful" response from my python script in only ~0.04ms??
Thank you in advance for any help you can provide.
Update:
I just checked again, and it seems that it's multiple sockets that are the problem, and the threading seems to have nothing to do with it. I get the same issue when I ping with each socket - then immediately close it - sequentially.
All your sockets are identical, and all bound to the same host. There simply isn't any information in the packet for the kernel to know which socket to go to, and raw(7) seems to imply all sockets will receive them.
You're probably getting all the responses in all the threads, meaning you're getting 12 times as many responses per thread as you're expecting.
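One way to tell the replies apart (a rough sketch, untested, reusing the question's calculate_checksum() and a connected raw ICMP socket created the same way as above; ping_with_id and ident are names introduced here, not from the original code) is to give each echo request its own ICMP identifier and discard any reply whose identifier doesn't match:

import socket
import struct
import time

def ping_with_id(sock, ident, payload):
    # Build an ICMP echo request: type 8, code 0, checksum, identifier, sequence 1
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)
    checksum = calculate_checksum(header + payload)
    header = struct.pack("!BBHHH", 8, 0, checksum, ident, 1)
    sent_at = time.time()
    sock.send(header + payload)
    deadline = sent_at + 0.1
    while time.time() < deadline:
        sock.settimeout(max(deadline - time.time(), 0.01))
        try:
            data = sock.recv(65535)
        except socket.timeout:
            return 0
        # Raw ICMP sockets hand back the IP header too; assume no IP options (20 bytes)
        icmp_type, _, _, reply_id, _ = struct.unpack("!BBHHH", data[20:28])
        if icmp_type == 0 and reply_id == ident:   # echo reply matching this request
            return (len(data), (time.time() - sent_at) * 1000)
    return 0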
I'm trying to do TCP ACK spoofing. I sniff one ACK packet from a pcap file and send it in a loop, incrementing its ACK number as well as another option field.
Sniffing Part: (Prespoofing)
from scapy.all import *
from struct import unpack, pack

pkt = sniff(offline="mptcpdemo.pcap", filter="tcp", count=15)
i = 6
while True:
    ack_pkt = pkt[i]
    if ack_pkt.sprintf('%TCP.flags%') == 'A':
        break
    i += 1

del ack_pkt.chksum
del ack_pkt[TCP].chksum
print ack_pkt.chksum, ack_pkt[TCP].chksum
hex2pkt = ack_pkt.__class__
Spoofing Part: (Non Optimized)
count = 1
while count < 5:
    ack_pkt[TCP].ack += 1
    pkt_hex = str(ack_pkt)
    rest = pkt_hex[:-4]
    last_4_bit = unpack('!I', pkt_hex[-4:])[0]
    new_hex_pkt = rest + pack('>I', (last_4_bit + 1))
    new_pkt = hex2pkt(new_hex_pkt)
    #sendp(new_pkt, verbose=0)
    print new_pkt.chksum, new_pkt[TCP].chksum
    count += 1
The output looks like this (the TCP checksum changes for each packet):
None None
27441 60323
27441 58895
27441 57467
27441 56039
After sending, the average time gap between two packets is around 15 ms (for 1000 packets).
When I check it with Wireshark, it shows "checksum is correct" for the 1st packet and "incorrect" for others.
Spoofing Part: (Little bit Optimized)
pkt_hex = str(ack_pkt)
rest1 = pkt_hex[:42]
tcp_ack = unpack('!I', pkt_hex[42:46])[0]
rest2 = pkt_hex[46:-4]
last_4_bit = unpack('!I', pkt_hex[-4:])[0]

count = 1
while count < 5:
    new_hex_pkt = rest1 + pack('>I', (tcp_ack+1)) + rest2 + pack('>I', (last_4_bit+1))
    new_pkt = hex2pkt(new_hex_pkt)
    #sendp(new_pkt, verbose=0)
    print new_pkt.chksum, new_pkt[TCP].chksum
    count += 1
The output looks like this (the TCP checksum does not change):
None None
27441 61751
27441 61751
27441 61751
27441 61751
After sending, the average time gap between two packets is around 10 ms (for 1000 packets).
The checksum does not change in the second case, even though the process is essentially the same. So what is the problem in the optimized version? And why are the TCP checksums calculated in the loop wrong for the subsequent packets?
Note:
last_4_bit is not the checksum field.
I'm able to see the ack number of the packets being incremented in tcpdump.
After extended testing, I saw that del ack_pkt[TCP].chksum deletes the checksum, but converting to a raw string with str(ack_pkt) recalculates it. After trying:
ack_pkt = sniff(offline="mptcpdemo.pcap", filter="tcp", count=15)[14]
del ack_pkt[TCP].chksum
print ack_pkt[TCP].chksum
print str(ack_pkt)
It first prints the checksum as None, but in the printed raw string the checksum field is non-zero and contains the recalculated checksum.
In the non-optimized code the packet is converted to a raw string inside the loop, so the checksum is recalculated on each iteration. In the optimized version the conversion happens outside the loop, so every packet carries that single value.
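A possible fix (just a sketch, untested against the original pcap, and assuming the incrementing 4-byte field sits at the end of the Raw payload rather than in the TCP options): make every modification on the Scapy packet first, then delete the checksum and rebuild inside the loop, so the recalculated checksum covers the final bytes.

from scapy.all import Raw
from struct import pack, unpack

count = 1
while count < 5:
    ack_pkt[TCP].ack += 1
    # Bump the trailing 4 bytes on the Scapy object itself, before building
    last4 = unpack('!I', str(ack_pkt[Raw].load)[-4:])[0]
    ack_pkt[Raw].load = str(ack_pkt[Raw].load)[:-4] + pack('!I', last4 + 1)
    del ack_pkt[TCP].chksum          # force Scapy to recompute on the next build
    new_pkt = hex2pkt(str(ack_pkt))  # the checksum now matches the modified bytes
    #sendp(new_pkt, verbose=0)
    print(new_pkt[TCP].chksum)
    count += 1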
I am trying to ping a few hundred devices in Python using multiprocessing.pool on Windows, as part of a larger program. The responses are parsed into success cases and failure cases (i.e. the request times out, or the host is unreachable).
The code below works fine, however, another part of the program takes the average from the response and calculates a rolling average with previous results fetched from a database.
The rolling average function fails very occasionally on int(new_average), because the new_average passed in is None. Note that the rolling average is only calculated in success cases.
I think the error must be in the parse function (this seems unlikely), or with how I am using multiprocessing.pool.
My question: am I using multiprocessing correctly? More generally, is there a better way to implement this asynchronous ping? I have looked at using Twisted, but I didn't see any ICMP protocol implementation (there is txnettools on GitHub, but I am not sure of the correctness of this, and it doesn't look maintained anymore).
A node object looks like:
class Node(object):
    def __init__(self, ip):
        self.ip = ip
        self.ping_result = None

    # other attributes and methods...
Here is the idea of the async ping code:
import os
import re
from multiprocessing.pool import ThreadPool

def parse_ping_response(response):
    '''Parses a response into a list of the format:
    [ip_address, packets_lost, average, success_or_failure]
    Ex: ['10.10.10.10', '0', '90', 'success']
    Ex: [None, 5, None, 'failure']
    '''
    reply = re.compile(r"Reply\sfrom\s(.*?)\:")
    lost = re.compile(r"Lost\s=\s(\d*)\s")
    average = re.compile(r"Average\s=\s(\d+)m")
    results = [x.findall(response) for x in [reply, lost, average]]
    # Get reply, if it was found. Set [] to None.
    results = [x[0] if len(x) > 0 else None for x in results]
    # Check for host unreachable error.
    # If we cannot find an ip address, the request timed out.
    if results[0] is None:
        return results + ['failure']
    elif 'Destination host unreachable' in response:
        return results + ['failure']
    else:
        return results + ['success']

def ping_ip(node):
    ping = os.popen("ping -n 5 " + node.ip, "r")
    node.ping_result = parse_ping_response(ping.read())
    return

def run_ping_tests(nodelist):
    pool = ThreadPool(processes=100)
    pool.map(ping_ip, nodelist)
    return

if __name__ == "__main__":
    # nodelist is a list of node objects
    run_ping_tests(nodelist)
An example ping response for reference (from the Microsoft docs):
Pinging 131.107.8.1 with 1450 bytes of data:
Reply from 131.107.8.1: bytes=1450 time<10ms TTL=32
Reply from 131.107.8.1: bytes=1450 time<10ms TTL=32
Ping statistics for 131.107.8.1:
Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate roundtrip times in milliseconds:
Minimum = 0ms, Maximum = 10ms, Average = 2ms
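For example (an illustrative check, not part of the original program), feeding that sample output into the parse_ping_response() defined above gives the expected four-element result:

sample = (
    "Pinging 131.107.8.1 with 1450 bytes of data:\n"
    "Reply from 131.107.8.1: bytes=1450 time<10ms TTL=32\n"
    "Reply from 131.107.8.1: bytes=1450 time<10ms TTL=32\n"
    "Ping statistics for 131.107.8.1:\n"
    "    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),\n"
    "Approximate roundtrip times in milliseconds:\n"
    "    Minimum = 0ms, Maximum = 10ms, Average = 2ms\n"
)
print(parse_ping_response(sample))
# ['131.107.8.1', '0', '2', 'success']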
I recommend using gevent (http://www.gevent.org, an asynchronous I/O library based on libev and greenlet coroutines) instead of multiprocessing.
It turns out there is an ICMP implementation for gevent:
https://github.com/mastahyeti/gping
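For context, here is a minimal sketch (not taken from gping, and untested) of how the same popen-style ping could run under gevent greenlets instead of a thread pool; gevent.subprocess is used so the ping processes don't block the event loop, and parse_ping_response() is the function from the question:

from gevent.pool import Pool
from gevent import subprocess

def ping_ip_gevent(node):
    try:
        out = subprocess.check_output(["ping", "-n", "5", node.ip])
    except subprocess.CalledProcessError as exc:
        out = exc.output               # Windows ping exits non-zero on packet loss
    node.ping_result = parse_ping_response(out.decode(errors="replace"))

def run_ping_tests_gevent(nodelist):
    pool = Pool(100)                   # at most 100 concurrent pings
    pool.map(ping_ip_gevent, nodelist)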
Is it possible to perform the simplest SMTP session using Scapy?
I tried reading a session captured with tcpdump into Scapy and sending the packets myself, but no luck...
This is what I have
#!/usr/bin/python
from scapy.all import *
from scapy.layers.inet import IP, TCP

source_ip = '1.2.3.4'
source_port = 5100
source_isn = 1000
dest_ip = '1.2.3.5'
dest_port = 25

ip = IP(src=source_ip, dst=dest_ip)
SYN = TCP(sport=source_port, dport=dest_port, flags="S", seq=source_isn)
SYNACK = sr1(ip/SYN)

source_isn = SYN.seq + 1
source_ack = SYNACK.seq + 1
ACK = TCP(ack=source_ack, sport=source_port, dport=dest_port, flags="A", seq=source_isn)
handshakedone = sr1(ip/ACK)

DTA = TCP(ack=handshakedone.seq + len(handshakedone.load), seq=source_isn, sport=source_port, dport=dest_port, flags="PA")
sr(ip/DTA/Raw(load='mail from: test@gmail.com\r\n'))
send(ip/DTA/Raw(load='rcpt to: me@gmail.com\r\n'))
source_isn = ACK.seq + len(mfrom)
.....
RST = TCP(ack=SYNACK.seq + 1, seq=source_isn, sport=source_port, dport=dest_port, flags="RA")
send(ip/RST)
The handshake is successful, but what should the ACK and SEQ values be during the rest of the session? How can I calculate them?
TCP seq and ack numbers are described in RFC 793 (starting at page 24). The whole spec is too long to post here, but basically, every byte of payload has a sequence number. In addition to the payload bytes, there are two control flags (SYN and FIN) that get their own sequence numbers. Initial sequence numbers should be randomized, but don't really matter if you're just playing around. The ack number in your packet is the next sequence number you expect to receive, and the seq field in the packet is the first sequence number in the segment.
So to ack all packets up to a given one, add the sequence number from the given packet to its length (including FIN or SYN flags, if set) and put that in your ack field.
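As a rough sketch (reusing the question's variable names; the payload is a placeholder and nothing here is verified against a real SMTP server), the bookkeeping after the handshake looks like this:

payload = 'MAIL FROM:<test@gmail.com>\r\n'

# After the handshake:
#   our next seq = initial ISN + 1   (the SYN consumed one sequence number)
#   our ack      = server ISN + 1    (acknowledging the server's SYN)
seq = 1000 + 1                       # 1000 is the source_isn chosen in the question
ack = SYNACK.seq + 1

DTA = TCP(sport=source_port, dport=dest_port, flags="PA", seq=seq, ack=ack)
reply = sr1(ip/DTA/Raw(load=payload))

# Our seq advances by the payload bytes we sent; our ack advances by the
# payload bytes the server sent back.
seq += len(payload)
if reply is not None and Raw in reply:
    ack = reply[TCP].seq + len(reply[Raw].load)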