I am trying to use requests to connect to a website that requires a client certificate.
import requests
r = requests.get(url, cert='path to cert')
print(r.status_code)
This works for another site that uses the same client cert. That server uses TLS_RSA_WITH_AES_128_CBC_SHA on TLS 1.0. However, my target site uses TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA on TLS 1.1. So basically, TLS 1.0 works and TLS 1.1 doesn't. Everything works fine in the browser, so it must have something to do with Python's SSL.
I am using requests version 2.7.0 and I have requests[security] installed as well. pip freeze:
cffi==0.9.2
cryptography==0.8.1
ndg-httpsclient==0.3.3
pyasn1==0.1.7
pycparser==2.10
pyOpenSSL==0.15.1
requests==2.7.0
six==1.9.0
The specific error I am getting is requests.exceptions.SSLError: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:600). This is on Windows 7 with Python 3.4.3. Unfortunately this is an internal machine, so I am stuck with Windows, and our internal mirror of PyPI does not have the latest versions of everything. It seems to me like this has something to do with ssl failing rather than requests itself.
Googling does not turn up promising results. There is a Stack Overflow post that describes the same problem, but the solution provided (using a custom adapter) does not work for me.
Hopefully someone else has run into this before and can give me some tips on how to fix it. Please and thanks.
EDIT: I did a wireshark capture of the interaction. The SSL alert sent back is "Level: Fatal (2) Description: Internal Error (80)". After the TCP connection start, my machine sends a client hello.
Content Type: Handshake (22)
Version: TLS 1.0 (0x0301)
Length: 512
Then the handshake protocol segment of that packet is
Handshake Type: Client Hello (1)
Length: 508
Version: TLS 1.2 (0x0303)
followed by a list of the supported cipher suites, etc. I looked in the list of cipher suites sent by my client and TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA is listed. The server ACKs this message then sends the Alert packet.
I got rid of an identical SSLError by removing the first entry, ECDH+AESGCM, from requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS; the server seemed to have problems with that suite. The line
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:!eNULL:!MD5'
solved the problem for me.
For me, using requests.request('GET', ...) instead of requests.get(...) works.
And I got rid of the above SSLError by removing almost all of the leading entries:
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:!eNULL:!MD5'
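For reference, a minimal sketch of applying such an override before making the request; the URL and cert path are placeholders, and the exact cipher string depends on what the server accepts.
import requests

# Override urllib3's default cipher list before the first connection is made;
# this is just the trimmed cipher string from above.
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = (
    'DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:DH+HIGH:'
    'ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:!eNULL:!MD5'
)

r = requests.get('https://internal.example/', cert='path to cert')  # placeholder URL and cert path
print(r.status_code)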
Related
I want to use Python to retrieve the remote server certificate (not validate or check it in any way). I have retrieved the server certificate using both methods, ssl.get_server_certificate and SSLSocket.getpeercert.
The main reason I had to try SSLSocket.getpeercert over ssl.get_server_certificate was that the timeout value on the TLS handshake was not being honored by ssl.get_server_certificate. One of the hosts I was trying to get the server certificate from had some problem and would hang my Python script during the TLS handshake, and only the SSLSocket.getpeercert approach would time this out.
I also notice I cannot retrieve the server certificate from very old systems that use TLS 1.0 or even SSL with SSLSocket.getpeercert, and there is no way to specify the ssl_version like there is with ssl.get_server_certificate.
So I see both methods retrieve the server certificate and each seems to have different issues. But what are the differences with what each does? When would I use one over the other?
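For context, here is roughly how I am calling each of the two (Python 3; the host is a placeholder and validation is deliberately disabled since I only want to fetch the certificate):
import socket
import ssl

host, port = 'example.com', 443  # placeholder host

# Approach 1: one call that returns the server's leaf certificate as a PEM string.
pem = ssl.get_server_certificate((host, port))

# Approach 2: open the TCP connection myself (so the socket timeout is honored),
# wrap it in TLS, then ask the SSL socket for the peer certificate.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # fetch only, no validation
with socket.create_connection((host, port), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der = tls.getpeercert(binary_form=True)   # DER bytes
        parsed = tls.getpeercert()                # parsed dict; empty when the cert was not validated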
Long story short, I have a project that requires me to create a controller from scratch in Python and handle requests, following the OpenFlow protocol, from switches created through a Mininet topology.
Helpful OpenFlow protocol resources:
https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf
http://flowgrammable.org/sdn/openflow/message-layer/
http://flowgrammable.org/sdn/openflow/message-layer/statsrequest/#ofp_1_3
My code is available here on github for cloning and full transparency:
[removed as of 10/12/2019, see my answer below]
The issue I am running into is that I am unable to send a multipart request message for the port stats description (search PortDesc on this link). When I view the packet data in Wireshark I get a "Range is out of bounds" error, and I haven't been able to figure out why. Here are a few screenshots of the packet data:
Wireshark captures:
Lua error messages:
Bad Request Error Message Response:
Something to note here is that the error code says OFPBRC_BAD_LEN (6), but the multipart request I send is 16 bytes long.
A classmate whose packets are accepted says they use the same packing structure that I do (see the Python struct documentation), yet theirs succeed. I don't know what the issue could be with mine, and I am running out of ideas to check. Any pointers would be greatly appreciated.
TL;DR: I am unable to send a multipart request; even though I am adhering to the request specification, the replies keep coming back with an error code. The error in Wireshark says "Range is out of bounds" and I do not know how else to structure my request to correct it.
I solved my problem, but I don't think I have an answer as to what the problem was. First I'll start with my solution, and then talk about what I believe the problem is.
Solution:
As you can see in the screenshots above, I was sending OpenFlow packets using the version 1.5 protocol, which is the newest version, but the OpenFlow message-layer documentation linked above only goes up to 1.4.
On top of that, the latest version of the multipart request that the documentation covers is 1.3.1. When I sent a multipart request as OpenFlow 1.5, it did not show up as the OpenFlow protocol in Wireshark, just as a regular TCP packet. I did the following three things:
1. In my topology file, where I create the switch, I was initializing the switches as s1 = self.addSwitch('s1'). What I added to this statement was the protocols parameter: s1 = self.addSwitch('s1', protocols='OpenFlow14').
2. For good measure I also added the protocols specification to the mininet command in the console: sudo mn --custom mytopo.py --topo mytopo --controller=remote,ipaddr=127.0.0.1,port=6653,protocols=OpenFlow14
3. I also changed how I was packing the requests, so instead of specifying version 1.5 (which is a '06' in the packet header), I packed it as 1.4 (which is a '05' in the packet header): req = struct.pack('!BBHI', 5, 5, 8, 0) (e.g. for the feature_request message sent to the switch).
These steps solved the issue I was running into and I was able to get a stats_reply from the switch.
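For reference, a rough sketch of what the 16-byte port-description multipart request can look like when packed for OpenFlow 1.4; the constants come from my reading of the 1.3/1.4 spec rather than from my repo, so double-check them against the spec linked above.
import struct

OFP_VERSION = 5               # 0x05 = OpenFlow 1.4
OFPT_MULTIPART_REQUEST = 18   # message type from the spec
OFPMP_PORT_DESC = 13          # multipart type for port descriptions
xid = 0

# ofp_header (version, type, length, xid) followed by the multipart header
# (type, flags) and 4 bytes of padding; OFPMP_PORT_DESC has no body.
req = struct.pack('!BBHIHH4x', OFP_VERSION, OFPT_MULTIPART_REQUEST, 16, xid,
                  OFPMP_PORT_DESC, 0)
assert len(req) == 16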
Problem (or what I think the problem is):
I believe the problem was that, as of right now, OpenFlow version 1.5 doesn't yet have support for the multipart request, as evidenced by the fact that when I send a version 1.5 multipart request for the port description, it shows up as a regular TCP packet instead of the OpenFlow protocol.
My code:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("www.python.org" , 80))
s.sendall(b"GET https://www.python.org HTTP/1.0\n\n")
print(s.recv(4096))
s.close()
Why does the output show me this:
b'HTTP/1.1 500 Domain Not Found\r\nServer: Varnish\r\nRetry-After: 0\r\ncontent-type: text/html\r\nCache-Control: private, no-cache\r\nconnection: keep-alive\r\nContent-Length: 179\r\nAccept-Ranges: bytes\r\nDate: Tue, 11 Jul 2017 15:23:55 GMT\r\nVia: 1.1 varnish\r\nConnection: close\r\n\r\n\n\n\nFastly error: unknown domain \n\n\nFastly error: unknown domain: . Please check that this domain has been added to a service.'
How can I fix it?
This is wrong on multiple levels:
- to access an HTTPS resource you need to create a TLS connection (i.e. an SSL/TLS wrap on top of the existing TCP connection, with proper certificate checking etc.) and then send the HTTP request. Of course the TCP connection in this case should go to port 443 (https), not 80 (http).
- the HTTP request should only contain the path, not the full URL
- the line end must be \r\n, not \n
- you had better send a Host header too, since many servers require it
And that's only the request. Properly handling the response is a different topic.
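To illustrate the points above, a rough sketch of what the corrected request might look like (Python 3, no error handling, and still not a substitute for a real HTTP client):
import socket
import ssl

host = 'www.python.org'

ctx = ssl.create_default_context()   # validates the server certificate by default
with socket.create_connection((host, 443)) as raw:          # port 443, not 80
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        # path only, CRLF line endings, and a Host header
        tls.sendall(b"GET / HTTP/1.0\r\nHost: www.python.org\r\n\r\n")
        print(tls.recv(4096))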
I really, really recommend using an existing library like requests. HTTP(S) is considerably more complex than most people think after only having looked at a few traffic captures.
import requests
x = requests.get('https://www.python.org')
print(x.text)
With the requests library, HTTPS requests are very simple! If you're doing this with raw sockets, you have to do a lot more work to negotiate ciphers and so on. Try the above code (it works on both Python 2.7 and Python 3).
I would also note that, in my experience, Python is excellent for doing things quickly. If you are learning about networking and cryptography, try writing an HTTPS client on your own using sockets. If you want to automate something quickly, use the tools that are available to you. I almost always use requests for this type of task. As an additional note, if you're interested in parsing HTML content, check out the PyQuery library. I've used it to automate interaction with many web services.
Requests
PyQuery
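A tiny sketch of using PyQuery on the page fetched above; the selector is only an illustration, not anything specific to python.org.
import requests
from pyquery import PyQuery as pq

x = requests.get('https://www.python.org')
doc = pq(x.text)                    # parse the fetched HTML
print([a.text for a in doc('a')])   # text of every <a> element, just as an example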
I am trying to access this site with Python's httplib2:
https://www.talkmore.no/talkmore3/servlet/Login
But I get this error:
httplib2.SSLHandshakeError: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
This is the python code I use:
login = "user"
pwd = "pass"
headers = {'Content-type': 'application/x-www-form-urlencoded'}
data = {'username':login, 'password':pwd}
h = httplib2.Http(".cache", disable_ssl_certificate_validation=True)
resp, content = h.request("https://www.talkmore.no/talkmore3/servlet/Login", "POST", urlencode(data))
I have tried with other libraries, but the same error occurs..
The server itself is fine and supports TLS 1.0 through TLS 1.2 (but no SSL 3.0). It also supports commonly used ciphers, and running your Python code gives no errors for me. This means that you either have some old and buggy version of Python/OpenSSL installed (version details are missing in the question) or that there is some middlebox in between which stops the connection (i.e. a firewall or similar).
Please try to access the same HTTPS site with a normal browser from the same machine to see if you get the same problem. If so, there is some middlebox blocking the data. If the browser succeeds, please make a packet capture (with tcpdump or similar) to look at the differences between the data sent by the browser and by your test program, and thus narrow down what the underlying problem might be.
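One quick thing to check is which OpenSSL your Python is linked against; something along these lines will print it, and an OpenSSL old enough to lack TLS 1.x support would explain the unknown-protocol error:
import ssl
import sys

print(sys.version)
print(ssl.OPENSSL_VERSION)   # e.g. 'OpenSSL 1.0.2g  1 Mar 2016'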
Our client wants a client script that will be installed on their customers' computers to be as trivial to install as possible. This means no extra-install packages, in this case PyCurl.
We need to be able to connect to a website using SSL that expects a client certificate. Currently this is done by calling curl with os.system(), but to get the HTTP return code that way it looks like we'll have to use curl's '-v' option and comb through its output. Not difficult, just a bit icky.
Is there some other way to do this using the standard library that comes with Python 2.6?
I read everything I could find on this and I couldn't see a non-Curl way of doing it.
Thanks in advance for any guidance on this subject whatsoever!
This will do the trick. Note that Verisign doesn't require a client certificate; it's just a randomly chosen HTTPS site.
import httplib

# key_file / cert_file are the client key and certificate (PEM files) presented to the server
conn = httplib.HTTPSConnection('verisign.com', key_file='./my-key.pem', cert_file='./my-cert.pem')
conn.connect()
conn.request('GET', '/')
conn.set_debuglevel(20)        # dump the rest of the conversation for debugging
response = conn.getresponse()
print 'HTTP status', response.status
EDIT: Just for posterity, Bruno's comment below is a valid one, and here's an article on how to roll it using the stdlib's ssl and socket modules, in case it's needed.
EDIT2: Seems I cannot post links - just do a web search for 'Validating SSL server certificate with Python 2.x another day'
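Roughly what that stdlib approach looks like, as I remember it; the host and file names are placeholders, and on Python 2.6 hostname checking still has to be done by hand after the handshake.
import socket
import ssl

sock = socket.create_connection(('example.com', 443))   # placeholder host
tls = ssl.wrap_socket(sock,
                      keyfile='./my-key.pem',            # client key, as above
                      certfile='./my-cert.pem',          # client certificate
                      cert_reqs=ssl.CERT_REQUIRED,       # fail the handshake if the server cert does not verify
                      ca_certs='./ca-bundle.pem')        # trusted CA bundle to verify against
print tls.getpeercert()
tls.close()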