I'm writing a port scanner using scapy, but I'm finding that it is horrendously slow. I use a single line of code to actually do the scan:
ans, unans = sr(IP(dst=targetIP)/TCP(dport=(1, 49151), flags='S'))
And it takes about 15 minutes to run, even though I'm on the same LAN as the computer I'm scanning. Heck, I'm plugged into the same SWITCH as my target!
I tried multi-threading, but that actually made it slower. Using multiple processes is faster, but only to a point. Either scapy's sniffer can't keep up and is losing packets, or the network itself is dropping packets (not likely, considering nmap works fine). In either case, with 5 processes I got the TCP scan time down to about 5-6 minutes, which, while a third of the single-process time, is still much slower than the ~10 seconds nmap takes.
Anyone know any other tricks to speed up Scapy port scans of large ranges?
Note that in your example you had forgotten the timeout parameter, which is crucial: without it, scapy will wait to have received an answer for each packet you have sent, which in your case will never happen!
As of 2018 (2.3.3dev (github version)), running
ans, unans = sr(IP(dst=targetIP)/TCP(dport=(1, 49151), flags='S'), timeout=2)
takes approximately 90 seconds. The pending PR https://github.com/secdev/scapy/pull/1142 speeds that up to around 50 seconds.
I'm working with python in order to receive a stream of UDP packets from an FPGA, trying to lose as few packets as possible.
The packet rate goes from around 5kHz up to some MHz and we want to take data in a specific time window (acq_time in the code).
We have this code now:
BUFSIZE = 4096

dataSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dataSock.settimeout(0.1)
dataSock.bind((self.HOST_IP, self.HOST_PORT))

def fast_acquisition(data_list_tmp):
    data, addr = dataSock.recvfrom(BUFSIZE)
    data_list_tmp.append(data)
    return len(data)

time0 = time.time()
data_list = []
while time.time() - time0 < acq_time:
    fast_acquisition(data_list)
And after the acquisition we save our data_list on disk.
This code is meant to be as simple and fast as possible, but it's still too slow and we lose too many packets even at 5kHz. We think this happens because while we read one packet, store it in the list, and check the time, the next one (or ones) arrives and is lost.
Is there any way to keep the socket open? Can we open multiple sockets "in series" with parallel processing, so that when we are saving the file from the first the second can receive another packet?
We can even think to use another language only to receive and store the packets on disk.
You could use tcpdump (which is implemented in C) to capture the UDP traffic, since it's faster than python:
#!/bin/bash
iface=$1 # interface, 1st arg
port=$2 # port, 2nd arg
tcpdump -i $iface -G acq_time_in_seconds -n udp port $port -w traffic.pcap
And then you can use e.g. scapy to process that traffic
#!/usr/bin/env python
from scapy.all import *

scapy_cap = rdpcap('traffic.pcap')
for packet in scapy_cap:
    # process packets...
    pass
There are several reasons why UDP packets can be lost, and certainly the speed of being able to take them off the socket queue and store them can be a factor, at least eventually. However, even if you had a dedicated C language program handling them, it's unlikely you'll be able to receive all the UDP packets if you expect to receive more than a million a second.
The first thing I'd do is to determine if python performance is actually your bottleneck. It is more likely in my experience that, first and foremost, you're simply running out of receive buffer space. The kernel will store UDP datagrams on your socket's receive queue until the space is exhausted. You might be able to extend that capacity a little with a C program, but you will still exhaust the space faster than you can drain the socket if packets are coming in at high enough speed.
Assuming you're running on linux, take a look at this answer for how to configure the socket's receive buffer space -- and examine the system-wide maximum value, which is also configurable and might need to be increased.
https://stackoverflow.com/a/30992928/1076479
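For reference, requesting a larger buffer from Python looks something like this (a minimal sketch; the 8 MB figure is arbitrary, and on Linux the kernel silently caps the request at net.core.rmem_max, so always read back the effective size):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask the kernel for a larger receive buffer (8 MB here is arbitrary).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)

# The kernel may grant less than requested (capped by net.core.rmem_max),
# so check what you actually got.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("effective receive buffer:", effective, "bytes")
```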
(If you're not on linux, I can't really give any specific guidance, although the same factors are likely to apply.)
It is possible that you will be unable to receive packets fast enough, even with more buffer space and even in a C program. In that case, #game0ver's idea of using tcpdump might work better if you only need to withstand a short intense burst of packets as it uses a much lower-level interface to obtain packets (and is highly optimized). But then of course you won't just have the UDP payload, you'll have entire raw packets and will need to strip the IP and Ethernet layer headers as well before you can process them.
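Stripping those headers by hand is mostly offset arithmetic; here is a minimal sketch that assumes untagged Ethernet II frames carrying IPv4 (no handling for VLAN tags, IPv6, or IP fragmentation):

```python
def udp_payload(frame: bytes) -> bytes:
    """Return the UDP payload of a raw Ethernet frame.

    Sketch only: assumes an untagged Ethernet II frame carrying IPv4;
    VLAN tags, IPv6, and fragmented IP packets are not handled.
    """
    eth_len = 14                          # fixed Ethernet II header length
    ihl = (frame[eth_len] & 0x0F) * 4     # IPv4 header length, in bytes
    udp_start = eth_len + ihl
    return frame[udp_start + 8:]          # skip the 8-byte UDP header
```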
I'm trying to get a looping call to run every 2 seconds. Sometimes I get the desired behaviour, but other times I have to wait up to ~30 seconds, which is unacceptable for my application's purposes.
I reviewed this SO post and found that looping call might not be reliable for this by default. Is there a way to fix this?
My usage/reason for needing a consistent ~2 seconds:
The function I am calling scans an image (using CV2) for a dollar value and if it finds that amount it sends a websocket message to my point of sale client. I can't have customers waiting 30 seconds for the POS terminal to ask them to pay.
My source code is very long and not well commented as of yet, so here is a short example of what I'm doing:
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

# scan the image for sales every 2 seconds
def scanForSale():
    print("Now Scanning for sale requests")

# retrieve a new image every 2 seconds
def getImagePreview():
    print("Loading Image From Capture Card")

lc = LoopingCall(scanForSale)
lc.start(2)

lc2 = LoopingCall(getImagePreview)
lc2.start(2)

reactor.run()
I'm using a Raspberry Pi 3 for this application, which is why I suspect it hangs for so long. Can I utilize multithreading to fix this issue?
Raspberry Pi is not a real time computing platform. Python is not a real time computing language. Twisted is not a real time computing library.
Any one of these by itself is enough to eliminate the possibility of a guarantee that you can run anything once every two seconds. You can probably get close but just how close depends on many things.
The program you included in your question doesn't actually do much. If this program can't reliably print each of the two messages once every two seconds then presumably you've overloaded your Raspberry Pi - a Linux-based system with multitasking capabilities. You need to scale back your usage of its resources until there are enough available to satisfy the needs of this (or whatever) program.
It's not clear whether multithreading will help - however, I doubt it. It's not clear because you've only included an over-simplified version of your program. I would have to make a lot of wild guesses about what your real program does in order to think about making any suggestions of how to improve it.
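That said, if the stalls come from the reactor thread being blocked by the scan itself (a slow CV2 call on the reactor thread delays every scheduled LoopingCall), moving the blocking work to a worker thread can help; Twisted provides twisted.internet.threads.deferToThread for this. Here is a stdlib illustration of the same idea, with a hypothetical slow_scan standing in for the image scan:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_scan():
    """Stand-in for a blocking image scan (hypothetical)."""
    time.sleep(0.2)
    return "no sale found"

executor = ThreadPoolExecutor(max_workers=1)

start = time.monotonic()
future = executor.submit(slow_scan)   # returns immediately; work runs in the pool
elapsed = time.monotonic() - start
result = future.result()              # block only when the answer is needed
executor.shutdown()
```

The point is that submit() returns almost instantly, so a scheduling loop on the main thread keeps firing on time while the slow work runs elsewhere.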
I've written a server python script in Windows 7 to send Ethernet UDP packets to a UNIX system running a C client receiving program that sends the message back to the server. However, sometimes (not always) a message in the last port (and always the last port) that python sends to won't arrive until the next batch of 4 messages is sent. This causes the timing of the message received for the last port incorrect to when it was sent, and I cannot have two messages back to back on the same port.
I have been able to verify this in Wireshark by locating two messages that arrived around the same time because the one that wasn't received was processed with the other. I have also checked the timing right after the recv() function and it shows a long delay and then a short delay because it basically had two packets received.
Things I have done to try to fix this; none solved it, but they have helped me characterize the problem:

- Adding a delay between each sendto() makes all messages arrive with correct timing, but I want the test to work the way I've written it below.
- Increasing the priority of the receive thread, thinking my Ethernet receive wasn't getting signalled to pick up the package or that some process was taking too long. This didn't help, and 20ms should be WAY more than necessary to process the data.
- Removing ports C and D: then port B misses messages (having only one port causes no issues). I thought reducing the number of ports would improve timing.
- Sending to a dummy PORTE immediately after PORTD lets me receive all of the messages with correct timing (I assume the problem is transferred to PORTE).
- Reproducing the python script in a UNIX environment and C code: the same issue occurs, pointing me to a receiving issue.
- Setting my recv function to time out every 1ms, hoping it could recover somehow even if the timing would be slightly off, but I still saw messages back to back.

I've also checked that no UDP packets have been dropped and that the buffer is large enough to hold those 4 messages. Any new ideas would help.
This is the core of the code: the python script sends 4 packets, one 20-byte message to each corresponding waiting thread in C, then delays for 20ms.
A representation of the python code looks something like
msg_cnt = 5000
cnt = 0
while cnt < msg_cnt:
    UDPsocket.sendto(data, (IP, PORTA))
    UDPsocket.sendto(data, (IP, PORTB))
    UDPsocket.sendto(data, (IP, PORTC))
    UDPsocket.sendto(data, (IP, PORTD))
    time.sleep(.02)
    cnt += 1
The C code has 4 threads waiting to receive on their corresponding ports. Essentially each thread should receive its packet, process it, and send back to the server. This process should take less than 20ms, before the next set of messages arrives.
void *receiveEthernetThread(void *arg){
    uint8_t ethRxBuff[1024];
    ssize_t byteCnt;

    if((byteCnt = recv(socketForPort, ethRxBuff, 1024, 0)) < 0){
        perror("recv");
    }else{
        // Process data; cannot have back-to-back messages on the same port
        // Send back to the server
    }
    return NULL;
}
I found out the reason I was missing messages a while back and wanted to answer my question. I was running the program on a Zynq-7000 and didn't realize this would be an issue.
In the Xilinx Zynq-7000-TRM, there is a known issue describing:
" It is possible to have the last frame(s) stuck in the RX FIFO with the software having no way to get the last frame(s) out of there. The GEM only initiates a descriptor request when it receives another frame. Therefore, when a frame is in the FIFO and a descriptor only becomes available at a later time and no new frames arrive, there is no way to get that frame out or even know that it exists.
This issue does not occur under typical operating conditions. Typical operating conditions are when the system always has incoming Ethernet frames. The above mentioned issue occurs when the MAC stops receiving the Ethernet frames.
Workaround: There is no workaround in the software except to ensure a continual flow of Ethernet frames."
It was basically fixed by ensuring a continuous flow of incoming Ethernet traffic; sorry for having left out that crucial information.
"This causes the timing of the message received for the last port incorrect to when it was sent, and I cannot have two messages back to back on the same port."
The short explanation is you are using UDP, and that protocol gives no guarantees about delivery or order.
That aside, what you are describing most definitely sounds like a buffering issue. Unfortunately, there is no real way to "flush" a socket.
You will either need to use a protocol that guarantees what you need (TCP), or implement your needs on top of UDP.
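For example, if all you need on top of UDP is to detect lost or reordered datagrams, prepending a sequence number is enough (a sketch; the 4-byte big-endian header format is an arbitrary choice):

```python
import struct

SEQ_HDR = struct.Struct("!I")  # 4-byte big-endian sequence number

def wrap(seq: int, payload: bytes) -> bytes:
    """Prefix a datagram with a sequence number before sendto()."""
    return SEQ_HDR.pack(seq) + payload

def unwrap(datagram: bytes):
    """Split a received datagram into (sequence number, payload).

    Gaps or decreases in the sequence on the receiving side reveal
    lost or reordered packets.
    """
    (seq,) = SEQ_HDR.unpack(datagram[:SEQ_HDR.size])
    return seq, datagram[SEQ_HDR.size:]
```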
I believe your real problem though is how the data is being parsed on the server side. If your application is completely dependent on a 20ms interval of four separate packets from the network, that's just asking for trouble. If it's possible, I would fix that rather than attempt fixing (normal) socket buffering issues.
A hacky solution though, since I love hacky things:
Set up a fifth socket on the server. After sending your four time-sensitive packets, send "enough" packets to the fifth port to force any remaining time-sensitive packets through. What is "enough" packets is up to you. You can send a static number, assuming it will work, or have the fifth port send you a message back the moment it starts recv-ing.
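The sending side of that trick might look like this (a sketch; flush_port is the hypothetical fifth port, and how many filler datagrams count as "enough" is something you would have to tune empirically):

```python
import socket

FLUSH_COUNT = 4  # "enough" is workload-dependent; tune empirically

def send_batch_with_flush(sock, data, ip, ports, flush_port):
    """Send one datagram to each real port, then push filler datagrams
    to a sacrificial flush port to force any stragglers through."""
    for port in ports:
        sock.sendto(data, (ip, port))
    for _ in range(FLUSH_COUNT):
        sock.sendto(b"\x00", (ip, flush_port))
```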
I have a Python 3 program which sends short commands to a host and gets short responses back (both 20 bytes). It's not doing anything complicated.
The socket is opened like this:
self.conn = socket.create_connection( ( self.host, self.port ) )
self.conn.settimeout( POLL_TIME )
and used like this:
while True:
    buf = self.conn.recv(256)
    # append buffer to bigger buffer, parse packet once we've got enough bytes
After my program has been running for a while (hours, usually), sometimes it goes into a strange mode - if I use tcpdump, I can see a response packet arriving at the local machine, but recv doesn't give me the packet until 30s (Windows) to 1m (Linux) later. The time is random +/- about ten seconds. I wondered if the packet was being delayed til the next packet arrived, but this doesn't seem to be true.
In the meantime, the same program is also operating a second socket connection using the same code on a different thread, which continues to work normally.
This doesn't happen all the time, but it's happened several times in a month. Sometimes it's preceded by a few seconds of packets taking longer and longer to arrive, but most of the time it just goes straight from OK to completely broken. Most of the time it stays broken for hours until I restart the server, but last night I noticed it recovering and going back to normal operation, so it's not irrecoverable.
CPU usage is almost zero, and nothing else is running on the same machine.
The weirdest thing is that this happens on both the Windows Subsystem for Linux (two different laptops) and on Linux (an AWS tiny instance running Amazon Linux).
I had a look at the CPython implementation of socket.recv() using GDB. Looking at the source code, it looks like it passes calls to socket.recv() straight through to the underlying recv(). However, while the outer function sock_recv() (which implements socket.recv() ) gets called frequently, it only calls recv() when there's actually data to read from the socket, using the socket_call() function to call poll()/select() to see if there's any data waiting. Calls to recv() happen directly before the app receives a packet, so the delay is somewhere before that point, rather than between recv() and my code.
Any ideas on how to troubleshoot this?
(Both the Linux and Windows machines are updated to the most recent everything, and the Python is Python 3.6.2)
[edit] The issue gets even weirder. I got fed up and wrote a method to detect the issue (looking for ten late-arriving packets in a row with near-identical roundtrip times), drop the connection and reconnect (by closing the previous connection and creating a new socket object) ... and it didn't work. Even with a new socket object, the delayed packets remain delayed by exactly the same amount. So I altered the method to completely kill the thread that was running that code and restart it, reasoning that perhaps there was some thread-local state. That still didn't work. The only resort I have left is killing the entire program and having a watchdog to restart it...
[edit2] Killing the entire program and restarting it with an external watchdog worked. It's a terrible solution, but at least it's a solution.
I am working on a project for work and I am stuck on a part where I need to monitor a serial line and listen for certain words using python.
The setup is that we have an automated RAM testing machine that tests RAM one module at a time and interacts via serial with software that came with the machine. That software is for monitoring/configuring the testing process, and it also displays all of the information from the SPD chip of each module. While the RAM tester was running I ran a serial port monitoring program and was able to see the same information that it displays in the software. The data I'm interested in is the speed of the RAM and the pass/fail result, both of which I was able to find in the data I monitored coming over the serial line. There are only 5 different speeds of RAM that we test, so I was hoping to have python monitor the serial line and wait for the speed of the RAM and the pass/fail result to come across. Once python detects the speed of the RAM, and if it passes, I will have python write to an Arduino, and the Arduino will control a conveyor belt that sorts the RAM by speed.
My idea is to have a variable for each of the RAM speeds and set the variables to 0. When python detects the RAM speed from the serial line it will set the corresponding variable to 1. Then when the test is over, the result, either pass or fail, will come over the serial line. This is where I am going to try to use an if statement. I imagine it would look something like this:
if PC-6400 == 1 and ser.read() == pass
ser.write(PC-6400) #serial write to the arduino
I know the use of ser.read() == pass is incorrect, and that's where I'm stuck. I do not know how to use the ser.read() function to look for certain words. I need it to look for the RAM speed (in this case PC-6400) and the word pass, but I have not been successful in finding either. Where I am currently stuck is getting python to detect the RAM speed so it can change the value of the variable. Would it be something close to this?
if ser.read() == PC-6400
PC-6400 = 1
This whole thing is a bit difficult for me to explain and I hope it all makes sense, I thank you in advance if anyone can give me some advice on how to get this going. I am pretty new to python and this is the most adventurous project I have worked on using python so far.
I'm still a bit confused, but here's a very basic example to hopefully get you started.
This will read from a serial port. It will attempt to read 512 bytes. If 512 bytes aren't available, it will wait forever, so make sure you set a timeout when you open the serial connection.
return_str = ser.read(size=512).decode("ascii", errors="replace")
You can also see how many bytes are waiting to be read:
print("num bytes available =", ser.inWaiting())
Once you have a string, you can check words within the string:
if "PASS" in return_str:
    print("the module passed!")

if "PC-6400" in return_str:
    print("module type is PC-6400")
To do something similar, but with variables:
# the name pass is reserved, so use a different variable name
pass_flag = "PASS"
PC6400 = 0
if pass_flag in return_str and "PC-6400" in return_str:
    PC6400 = 1
Keep in mind that it is possible to read part of a line if you are too quick. You can add delays by using timeouts or the time.sleep() function. You might also find you need to wait a couple of seconds after opening the connection, as the arduino resets when you connect; this gives it time to reset before you try to read or write.
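To tie the flag idea together without one variable per speed, a dict works well. This is only a sketch: PC-6400 is the one speed named in the question, the other speed strings are placeholders, and line stands in for text already read from the serial port:

```python
# Only "PC-6400" appears in the question; the other entries are placeholders
# for the remaining four speeds the tester handles.
KNOWN_SPEEDS = ["PC-6400", "SPEED-2", "SPEED-3", "SPEED-4", "SPEED-5"]

def parse_line(line, seen):
    """Update `seen` (a dict of speed -> bool) from one line of serial text.

    Returns the speed to report (e.g. to write to the Arduino) once a
    "PASS" line arrives for a previously seen speed, else None.
    """
    for speed in KNOWN_SPEEDS:
        if speed in line:
            seen[speed] = True
    if "PASS" in line:
        for speed, flagged in seen.items():
            if flagged:
                return speed
    return None
```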