I have to filter and modify network traffic using the Linux kernel's libnetfilter_queue (specifically, its Python binding) and dpkt, and I'm trying to implement delayed packet forwarding.
Normal filtering works really well, but if I try to delay packets with a function like this:
def setVerdict(pkt, nf_payload):
    nf_payload.set_verdict_modified(nfqueue.NF_ACCEPT, str(pkt), len(pkt))

t = threading.Timer(10, setVerdict, [pkt, nf_payload])
t.start()
it crashes without throwing any exception (surely a low-level crash). Can I implement the delay directly with libnetfilter_queue like this, or must I copy pkt, drop it, and send the copy using a standard socket.socket.send()?
Thank you
Sorry for the late reply, but I needed to do something like this, although slightly more complicated. I used the C version of the library: I copied packets to a buffer inside my program and then issued a DROP verdict. After a timeout corresponding to your delay, I reinject the packet using a raw socket. This works fine and seems quite efficient.
I think the reason for your crash was that you didn't issue a verdict fast enough.
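For reference, here is a minimal Python sketch of that copy-drop-reinject pattern. The get_data()/set_verdict() calls and the callback signature are assumptions based on the binding used in the question; the raw-socket reinjection uses only the standard library and dpkt:

import socket
import threading
import dpkt
import nfqueue  # the libnetfilter_queue binding from the question

# IPPROTO_RAW implies IP_HDRINCL, so the buffered bytes
# (which still include their IP header) are sent out verbatim.
raw_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)

def reinject(data):
    ip = dpkt.ip.IP(data)  # parse only to recover the destination address
    raw_sock.sendto(data, (socket.inet_ntoa(ip.dst), 0))

def queue_callback(dummy, nf_payload):
    data = nf_payload.get_data()             # copy the packet out of the queue
    nf_payload.set_verdict(nfqueue.NF_DROP)  # verdict is issued immediately
    threading.Timer(10, reinject, [data]).start()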
I can't answer your question, but why not use the "netem" queueing discipline on the outgoing interface to delay the packets?
It is possible to configure tc queues to apply different policies to packets that are "marked" in some way; the usual way to mark packets is with a netfilter module (e.g. iptables or nfqueue).
I am trying to simulate a communication protocol in which I follow a pattern, so I constantly loop through the input looking for the same set of characters and reply with information. I'm using an RS-232 adapter, and the protocol I am simulating is asynchronous and half-duplex; the rx/tx lines are tied together by design, which causes a sort of echo when reading after writing.
That said, I need to clear the input buffer after every write I send out, in order to avoid reading back what I just wrote. But whenever I use reset_input_buffer(), it does not clear the last message I sent. I have tried to fix this in a couple of ways: using reset_output_buffer() together with reset_input_buffer(), calling reset_input_buffer() twice, and using flush(). None of these makes any difference. The only other method that does clear the buffer is closing and immediately reopening the port, but that causes a delay that messes with the timing, which is critical at certain points.
I'm open to any suggestions, please help!
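One workaround that may be worth trying, sketched below: instead of clearing the buffer, read back exactly as many bytes as were just written, so the echo is consumed deterministically. The port name and settings are placeholders:

import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)  # placeholder settings

def write_and_discard_echo(data):
    ser.write(data)
    ser.flush()           # block until the bytes have actually left the driver
    ser.read(len(data))   # consume the echo produced by the tied rx/tx lines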
I am using Scapy to capture Wi-Fi client probe request frames. I am only interested in the clients' MAC addresses and the requested SSIDs. I do something like the following.
sniff(iface="mon0", prn=Handler)
def Handler(pkt):
if pkt.hasLayer(Dot11):
if pkt.type == 0 and pkt.subtype == 4:
print pkt.addr2 + " " + pkt.info
My issue is that I am doing this on an embedded device with limited processing power. When I run my script, processor utilization rises to nearly 100%. I assume this is because of the sheer volume of frames that Scapy sniffs and passes to my Python code. I also assume that with the right filter in my sniff command, I could eliminate many of the frames that are not being used and thus reduce the processor load.
Is there a filter statement that could be used to do this?
With Scapy it is possible to sniff with a BPF filter applied to the capture. This filters out packets at a much lower level than your Handler function does, and should significantly improve performance.
There is a simple example in the Scapy documentation.
# Could also add a subtype to this.
sniff(iface="mon0", prn=Handler, filter="type mgt")
Filter pieced together from here specifically.
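If only probe requests matter, the subtype can presumably be folded into the BPF expression too (hypothetical, untested here):

sniff(iface="mon0", prn=Handler, filter="type mgt subtype probe-req")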
Unfortunately I can't test this right now, but this information should provide you with a stepping stone toward your ultimate solution, or let someone else post exactly what you need. I believe you will also need to set the interface to monitor mode.
You may also find this question of interest - Accessing 802.11 Wireless Management Frames from Python.
Scapy is extremely slow due to the way it decodes the data. You may:
1. Use a BPF filter on the input to keep only the frames you are looking for before handing them to Scapy. See this module for an example; it uses libpcap to get the data from the air or from a file first, and passes it through a dynamically updated BPF filter to keep unwanted traffic out.
2. Write your own parser for Wi-Fi in C (which is not too hard, given the limited amount of information you need; there are things like prismhead, though).
3. Use tshark from Wireshark as a subprocess and collect data from there.
I highly recommend the third approach, although Wireshark comes with a >120 MB library that your embedded device might not be able to handle. A rough sketch of it follows.
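This is only a sketch of the tshark approach; the exact field names and filter flag vary across tshark versions, so treat them as placeholders:

import subprocess

cmd = [
    "tshark", "-i", "mon0", "-l",        # -l: line-buffered output
    "-R", "wlan.fc.type_subtype == 4",   # probe requests only (newer tshark uses -Y)
    "-T", "fields", "-e", "wlan.sa", "-e", "wlan_mgt.ssid",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in proc.stdout:
    print line.strip()  # one "<client MAC>  <SSID>" pair per line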
I am looking to create a client/server application that I can use to split network packets in half, tunnel each half over a separate UDP connection (because each UDP connection will be going over a different Wi-Fi link), and then reassemble the split packets on the other end. In addition to splitting the packets, each half will also need an ID and a sequence number so that the halves can be reassembled properly.
Basically I am trying to do something similar to MLPPP
I am looking to do this using python and the TUN/TAP network driver.
I have found the following python code samples and modules that I think might be helpful for this project.
Python tun/tap
http://www.secdev.org/projects/tuntap_udp/files/tunproxy.py
http://twistedmatrix.com/trac/browser/trunk/twisted/pair/tuntap.py
http://pastebin.com/gMB8zUfj
Python raw packet manipulation
http://libdnet.sourceforge.net/
http://pypi.python.org/pypi/pyip/
http://code.google.com/p/python-packet/
My question is: can the necessary packet modification be done in Python, and what would be a good way to approach this? Can I use the modules above, or is there a better solution? I am looking for input that will steer me in the right direction, as I am not an experienced programmer. Any code samples or additional links are welcome.
We are doing something like this in production and it works quite well. We don't split individual packets though. We set fractional weights for each connection (unlimited) and send the packets out. We have some code in place to deal with different latencies on each line. On the other end we buffer them and reorder. Performance is pretty good - we have sites with 5+ ADSL lines and get good speeds, 40+ Mbps on the download.
Splitting packets (e.g. 1500/2 = 750) would introduce unnecessary overhead... keep your packets as big as possible.
We have developed our own protocol (header format) for the UDP packets. We have done loopback testing on the tun/tap up to 200 Mbps, so definitely the kernel to user space interaction works well. Previously we used NFQUEUE but that had reliability issues.
All of the above was written in Python.
It looks perfectly possible to me.
The tun/tap modules you've discovered look like they would do the job. Twisted will give you high performance, at the cost of hurting your head while working it all out.
As for splitting the packets, you don't need to interpret the data in any way; just treat it as a blob of binary data, split it in two, and add a header. I wouldn't use any third-party modules for that, just normal Python string handling.
Or you could use netstrings if you want an easy to use packet encapsulation format.
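For illustration, here is a sketch of that plain string handling, with a hypothetical 6-byte header carrying a packet ID, a fragment index, and a fragment count:

import struct

HEADER = struct.Struct("!IBB")  # packet id, fragment index, fragment count

def split_packet(pkt_id, data):
    half = len(data) // 2
    halves = [data[:half], data[half:]]
    return [HEADER.pack(pkt_id, i, len(halves)) + part
            for i, part in enumerate(halves)]

pending = {}

def reassemble(fragment):
    pkt_id, idx, count = HEADER.unpack(fragment[:HEADER.size])
    parts = pending.setdefault(pkt_id, [None] * count)
    parts[idx] = fragment[HEADER.size:]
    if None in parts:
        return None        # still waiting for the other half
    del pending[pkt_id]
    return "".join(parts)  # complete packet, ready to write to the tun device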
I don't suppose it would go like a rocket, but I'm sure you would learn lots doing it!
I'm building a download manager in Python for fun, and sometimes the connection to the server stays open but the server doesn't send me any data, so the read method (of HTTPResponse) blocks forever. This happens, for example, when I download from a server located outside my country that limits the bandwidth to other countries.
How can I set a timeout for the read method (2 minutes for example)?
Thanks, Nir.
If you're stuck on some Python version < 2.6, one (imperfect but usable) approach is to do
import socket
socket.setdefaulttimeout(10.0) # or whatever
before you start using httplib. The docs are here, and clearly state that setdefaulttimeout is available since Python 2.3 -- every socket made from the time you do this call, to the time you call the same function again, will use that timeout of 10 seconds. You can use getdefaulttimeout before setting a new timeout, if you want to save the previous timeout (including none) so that you can restore it later (with another setdefaulttimeout).
These functions and idioms are quite useful whenever you need to use some older higher-level library which uses Python sockets but doesn't give you a good way to set timeouts (of course it's better to use updated higher-level libraries, e.g. the httplib version that comes with 2.6 or the third-party httplib2 in this case, but that's not always feasible, and playing with the default timeout setting can be a good workaround).
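The save-and-restore idiom might look like this:

import socket

old_timeout = socket.getdefaulttimeout()  # may be None
socket.setdefaulttimeout(120.0)           # the two minutes from the question
try:
    pass  # ... make httplib requests here ...
finally:
    socket.setdefaulttimeout(old_timeout)  # restore whatever was set before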
You have to set it during HTTPConnection initialization.
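For example, on Python 2.6+, where HTTPConnection accepts a timeout argument (in seconds):

import httplib

conn = httplib.HTTPConnection("example.com", timeout=120)
conn.request("GET", "/")
resp = conn.getresponse()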
Note: if you are using an older version of Python, you can install httplib2; many consider it a superior alternative to httplib, and it does support timeouts.
I've never used it, though, and I'm just reporting what documentation and blogs are saying.
Setting the default timeout might abort a large download early, as opposed to aborting only when it stops receiving data for the timeout interval. httplib2 is probably the way to go.
5 years later but hopefully this will help someone else...
I was racking my brain trying to figure this out. My problem was a server returning corrupt content and thus giving back less data than its headers promised.
I came up with a nasty solution that seems to work properly. Here it goes:
# NOTE: directly disabling blocking is not necessary, but it represents
# an important piece of the problem, so I am leaving it here.
# http_response.fp._sock.socket.setblocking(0)
http_response.fp._sock.settimeout(read_timeout)
http_response.read(chunk_size)
NOTE: this solution also works for the Python requests library, and for ANY library that is built on normal Python sockets (which should be all of them?). You just have to go a few levels deeper:
# resp.raw._fp.fp._sock.socket.setblocking(0)  # optional, as above
resp.raw._fp.fp._sock.settimeout(read_timeout)
resp.raw.read(chunk_size)
As of this writing, I have not tried the following, but in theory it should work:
resp = requests.get(some_url, stream=True)
# resp.raw._fp.fp._sock.socket.setblocking(0)  # optional, as above
resp.raw._fp.fp._sock.settimeout(read_timeout)
for chunk in resp.iter_content(chunk_size):
    pass  # do stuff with each chunk
Explanation
I stumbled upon this approach while reading this SO question about setting a timeout on socket.recv.
At the end of the day, any HTTP request has a socket underneath. For requests, that socket is located at resp.raw._fp.fp._sock.socket (for plain httplib it is reached via http_response.fp._sock, as above). resp.raw._fp.fp._sock is a socket._fileobject (which I honestly didn't look far into), and I imagine its settimeout method internally sets the timeout on its socket attribute.
I'm writing some Python and am stuck at the moment.
I think the "Nagle algorithm" is the problem, since for some reason my packets reach the client with a delay.
I've tried this on both the client and the server, but it doesn't seem to work (or there's another problem causing the delay):
socketobj.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
Any ideas?
EDIT: A full explanation of my problem can be found here:
http://www.gamedev.net/community/forums/topic.asp?topic_id=554172&whichpage=1
I'm not familiar with Python's sockets, but is there a flush method? Even with Nagle disabled, most socket implementations will buffer if you don't write a certain number of bytes. However, if you call flush, the bytes should be sent immediately.
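For what it's worth, plain Python socket objects have no flush method, but the file objects returned by makefile() do buffer in user space, independently of Nagle; a small sketch of pushing data out promptly:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 12345))  # placeholder address
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

f = sock.makefile("wb")
f.write("important message")
f.flush()  # push the user-space buffer down to the socket immediately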