I have code as below:
import ssl
import httplib

headers = {'content-type': 'application/xml'}  # the literal MIME type, not the name of a constant
host = "www.client.url.com"  # HTTPSConnection takes just the host; the path ("/hit-here/") goes into request()
clientCert = "path/to/cert/abc.crt"
clientKey = "path/to/key/abc.key"
PROTOCOL = ssl.PROTOCOL_TLSv1
context = ssl.SSLContext(PROTOCOL)
context.load_default_certs()                    # trust the system CA store
context.load_cert_chain(clientCert, clientKey)  # present the client certificate
conn = httplib.HTTPSConnection(host, some_port, context=context)
I am not really a network programmer, so I did some googling on handshake connections and found ssl.SSLContext(PROTOCOL) as the function I needed; the code works fine.
Then I hit a roadblock: my local machine has Python 2.7.10, but all the production boxes have 2.7.3, where SSLContext is not supported, and upgrading the Python version is not an option / not in my control.
I tried reading ssl — SSL wrapper for socket objects but couldn't make sense of it.
What I tried (in vain):
import socket
import ssl

s_ = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s = ssl.wrap_socket(s_, keyfile=clientKey, certfile=clientCert,
                    cert_reqs=ssl.CERT_REQUIRED)
s.connect((host, some_port))  # connect() returns None; the wrapped socket s is the connection
but it raises:
SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)')
Question: how do I create an SSL context on the older version so as to have a secure HTTPS connection?
You have to specify the ca_certs file (which should point to your trust store); with cert_reqs=ssl.CERT_REQUIRED but no ca_certs, there is nothing to verify the server against, which is exactly the CERTIFICATE_VERIFY_FAILED error above.
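A minimal sketch for Python 2.7.3 (no ssl.SSLContext); the CA bundle path is a placeholder you must point at your own trust store:
import socket
import ssl

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s = ssl.wrap_socket(sock,
                    keyfile="path/to/key/abc.key",
                    certfile="path/to/cert/abc.crt",
                    cert_reqs=ssl.CERT_REQUIRED,       # verify the server's certificate
                    ssl_version=ssl.PROTOCOL_TLSv1,
                    ca_certs="path/to/ca-bundle.crt")  # placeholder: your trust store
s.connect(("www.client.url.com", some_port))           # handshake happens here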
I've got a solution using the requests library. requests has got to be my favorite library, because it takes something in Python that is inherently difficult to do -- SSL and REST requests -- and makes it unbelievably simple. I checked their version support, and Python 2.6+ is supported.
Here is an example of how to use their library:
>>> import requests
>>> requests.get(uri)
And that is all you have to do; the requests library takes care of establishing an SSL connection.
Taking this one step further: if you need to persist cookies between requests, you can do so like this:
>>> sess = requests.Session()
>>> credentials = {"username": "user",
...                "password": "pass"}
>>> sess.post("https://some-website/login", data=credentials)
<Response [200]>
>>> sess.get("https://some-website/a-backend-page").text
<html> the backend page... </html>
Edit: If you need to, you can also pass in the paths to the client certificate and key, like so:
requests.get(uri, cert=('path/to/cert/abc.crt', 'path/to/key/abc.key'))
Now hopefully you can convince them to install the requests library on the production boxes, because it would be well worth it. Let me know if this works out for you.
Related
This is probably a dumb question, but I just want to make sure about the below.
I am currently using the requests library in Python to call an external API hosted on the Azure cloud.
If I use the requests library from a virtual machine and it sends to the URL https://api-management-example/run, does that mean my communication to this API, as well as the entire payload I send through, is secure? I have seen that in the site-packages of my virtual environment there is a cacert.pem file. Do I need to update that at all? Do I need to do anything else on my end to ensure the communication is secure, or does the fact that I am calling the HTTPS URL mean it is secure?
Any information/guidance would be much appreciated.
Thanks,
HTTPS is secure when the server presents a valid certificate signed by a trusted CA. Some people use a self-signed certificate to maintain HTTPS. In the requests library, you control certificate verification explicitly with the verify parameter. If the server uses a self-signed certificate, you need to pass the certificate path so requests can check the server against your local copy.
Default (verify=True):
import requests
response = requests.get("https://api-management-example/run", verify=True)
Self-signed certificate:
import requests
response = requests.get("https://api-management-example/run", verify="/path/to/local/certificate/file/")
POST requests are generally preferred for sensitive data because they carry it in the message body rather than in the URL. GET requests append the parameters to the URL, which is also visible in the browser history and in server logs; note that over SSL/TLS (HTTPS) the GET parameters are encrypted on the wire as well, so the remaining risk is logging and history, not eavesdropping. If you are not using HTTPS at all, POST is still the preference for security, though nothing is encrypted in transit either way.
A dictionary object can be used to send the data as key-value pairs via the data parameter of the post method, as in the sketch below.
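A minimal sketch (the login URL is a placeholder; the dict is form-encoded into the request body):
import requests

payload = {"username": "user", "password": "pass"}
# data= sends the pairs in the request body; params= would expose them in the URL
response = requests.post("https://some-website/login", data=payload)
print(response.status_code)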
The HTTPS protocol is safe provided you have a valid SSL certificate on your API. If you want to be extra safe, you can implement end-to-end encryption/cryptography: converting your plaintext into scrambled text, called ciphertext, before it ever goes over the wire.
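For illustration, a sketch of application-level encryption with the cryptography package (the package choice is my assumption; any symmetric cipher would do):
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be shared out-of-band with the receiver
f = Fernet(key)
ciphertext = f.encrypt(b"my plaintext payload")     # what actually travels
assert f.decrypt(ciphertext) == b"my plaintext payload"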
You can explicitly enable verification in requests library:
import requests
session = requests.Session()
session.verify = True
session.post(url='https://api-management-example/run', data={'bar':'baz'})
This is enabled by default. You can also verify the certificate per request:
requests.get('https://github.com', verify='/path/to/certfile')
Or per session:
s = requests.Session()
s.verify = '/path/to/certfile'
Read the docs.
I am running a REST API (the Search API) with Tweepy in Python. I ran the program at home and it was totally fine. But now I am working on a different network and I get this error message:
SSLError: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)
My code is like this:
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
I found this post
Python Requests throwing up SSLError
which suggests that setting verify=False may be a quick solution. Does anyone know how to do that, or another way, in tweepy? Thank you.
In streaming.py, adding verify = False at line 105 did the trick for me, as shown below. It is not advisable to use this approach, though, as it makes the connection unsafe; I haven't been able to come up with a workaround yet.
stream = Stream(auth, listener, verify = False)
I ran into the same problem, and unfortunately the only thing that worked was setting verify=False in Tweepy's auth.py (for me, Tweepy is located in /anaconda3/lib/python3.6/site-packages/tweepy on my Mac):
resp = requests.post(self._get_oauth_url('token'),
                     auth=(self.consumer_key,
                           self.consumer_secret),
                     data={'grant_type': 'client_credentials'},
                     verify=False)
Edit:
Behind a corporate firewall, there is often a certificate issue. In Chrome, go to Settings > Advanced > Certificates and download your corporate CA certificate. Then, in Tweepy's binder.py, right under session = requests.session(), add:
session.verify = 'path_to_corporate_certificate.cer'
First, verify whether you can access Twitter using just a proxy configuration. If so, you can modify this line in your code to include a proxy URL:
self.api = tweepy.API(self.auth)
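For example (a sketch assuming your Tweepy version accepts the proxy keyword; the proxy URL is a placeholder):
proxy_url = "https://proxy.mycompany.com:8080"     # placeholder for your proxy
self.api = tweepy.API(self.auth, proxy=proxy_url)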
Adding verify=False skips certificate validation entirely: the traffic is still encrypted, but you can no longer be sure who you are talking to, which leaves the connection open to man-in-the-middle attacks.
pip install certifi
Installing (or upgrading) certifi refreshes the CA bundle that requests relies on, which fixed the bad handshake and SSL error for me.
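If the error persists, you can point requests at certifi's bundle explicitly (the URL is a placeholder for whatever endpoint is failing):
import certifi
import requests

response = requests.get("https://api.twitter.com/1.1/search/tweets.json",
                        verify=certifi.where())  # path to certifi's CA bundle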
For anybody that might stumble on this like I did, I had a similar problem because my company was using a proxy, and the SSL check failed while trying to verify the proxy's certificate.
The solution was to export the proxy's root certificate as a .pem file. Then you can add this certificate to certifi's trust store by doing:
import certifi
cafile = certifi.where()
with open(r'<path to pem file>', 'rb') as infile:
    customca = infile.read()
with open(cafile, 'ab') as outfile:  # append the proxy's root CA to certifi's bundle
    outfile.write(customca)
You'll have to replace <path to pem file> with the path to the exported file. This should allow requests (and tweepy) to successfully validate the certificates.
I have a server set up for testing, with a self-signed certificate, and want to be able to test against it.
How do you ignore SSL verification in the Python 3 version of urlopen?
All information I found regarding this is regarding urllib2 or Python 2 in general.
urllib in Python 3 has changed from urllib2:
Python 2, urllib2: urllib2.urlopen(url[, data[, timeout[, cafile[, capath[, cadefault[, context]]]]]])
https://docs.python.org/2/library/urllib2.html#urllib2.urlopen
Python 3: urllib.request.urlopen(url[, data][, timeout])
https://docs.python.org/3.0/library/urllib.request.html?highlight=urllib#urllib.request.urlopen
So I know this can be done in Python 2 in the following way. However, the Python 3 urlopen is missing the context parameter.
import urllib2
import ssl
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
urllib2.urlopen("https://your-test-server.local", context=ctx)
And yes I know this is a bad idea. This is only meant for testing on a private server.
I could not find how this is supposed to be done in the Python 3 documentation, or in any other question. Even the ones explicitly mentioning Python 3, still had a solution for urllib2/Python 2.
The accepted answer just gave advice to use Python 3.5+ instead of a direct answer, which causes confusion.
For someone looking for a direct answer, here it is:
import ssl
import urllib.request
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with urllib.request.urlopen(url_string, context=ctx) as f:
    f.read(300)
Alternatively, if you use the requests library, it has a much better API:
import requests
with open(file_name, 'wb') as f:
    resp = requests.get(url_string, verify=False)
    f.write(resp.content)
The answer is copied from this post (thanks #falsetru): How do I disable the ssl check in python 3.x?
These two questions should be merged.
Python 3.0 to 3.3 does not have the context parameter; it was added to urlopen in Python 3.4.3. So you can update your Python version to 3.5 to use context.
From Python, I would like to retrieve content from a web site via HTTPS with basic authentication. I need the content on disk. I am on an intranet and trust the HTTPS server. The platform is Python 2.6.2 on Windows.
I have been playing around with urllib2, but have not succeeded so far.
I have a solution running, calling wget via os.system():
wget_cmd = r'\path\to\wget.exe -q -e "https_proxy = http://fqdn.to.proxy:port" --no-check-certificate --http-user="username" --http-password="password" -O path\to\output https://fqdn.to.site/content'
I would like to get rid of the os.system(). Is that possible in Python?
Proxy and HTTPS weren't working together for a long time with urllib2. It will be fixed in the next released version of Python 2.6 (v2.6.3).
In the meantime you can reimplement the correct support; that's what we did for Mercurial: http://hg.intevation.org/mercurial/crew/rev/59acb9c7d90f
Try this (notice that you'll have to fill in the realm of your server also):
import urllib2

authinfo = urllib2.HTTPBasicAuthHandler()
authinfo.add_password(realm='Fill In Realm Here',
                      uri='https://fqdn.to.site/content',
                      user='username',
                      passwd='password')
proxy_support = urllib2.ProxyHandler({"https": "http://fqdn.to.proxy:port"})
opener = urllib2.build_opener(proxy_support, authinfo)
fp = opener.open("https://fqdn.to.site/content")
open(r"path\to\output", "wb").write(fp.read())
You could try this too:
http://code.google.com/p/python-httpclient/
(It also supports the verification of the server certificate.)
Does urllib2 in Python 2.6.1 support proxy via https?
I've found the following at http://www.voidspace.org.uk/python/articles/urllib2.shtml:
NOTE: Currently urllib2 does not support fetching of https locations through a proxy. This can be a problem.
I'm trying to automate login to a web site and download a document; I have a valid username/password.
proxy_info = {
    'host': "axxx",  # commented out the real data
    'port': "1234"   # commented out the real data
}
proxy_handler = urllib2.ProxyHandler(
    {"http": "http://%(host)s:%(port)s" % proxy_info})
opener = urllib2.build_opener(proxy_handler,
                              urllib2.HTTPHandler(debuglevel=1),
                              urllib2.HTTPCookieProcessor())
urllib2.install_opener(opener)
fullurl = 'https://correct.url.to.login.page.com/user=a&pswd=b'  # example
req1 = urllib2.Request(url=fullurl, headers=headers)
response = urllib2.urlopen(req1)
I've had this working for similar pages, but not using HTTPS, and I suspect it does not get through the proxy: it just gets stuck in the same way as when I did not specify a proxy. I need to go out through the proxy.
I need to authenticate, but not using basic authentication. Will urllib2 figure out authentication when going to an https site (I supply the username/password to the site via the URL)?
EDIT:
Nope, I tested with
proxies = {
    "http": "http://%(host)s:%(port)s" % proxy_info,
    "https": "https://%(host)s:%(port)s" % proxy_info
}
proxy_handler = urllib2.ProxyHandler(proxies)
And I get the error:
urllib2.URLError: urlopen error [Errno 8] _ssl.c:480: EOF occurred in violation of protocol
Fixed in Python 2.6.3 and several other branches:
http://bugs.python.org/issue1424152
http://www.python.org/download/releases/2.6.3/NEWS.txt
Issue #1424152: Fix for httplib, urllib2 to support SSL while working through proxy. Original patch by Christopher Li, changes made by Senthil Kumaran.
I'm not sure Michael Foord's article, which you quote, is updated for Python 2.6.1 -- why not give it a try? Instead of telling ProxyHandler that the proxy is only good for http, as you're doing now, register it for https too. (Of course, you should format the proxy URL into a variable just once before you call ProxyHandler and reuse that variable in the dict; a sketch follows.) That may or may not work, but you're not even trying, and that's sure not to work!-)
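A sketch of that suggestion, reusing proxy_info from the question:
proxy_url = "http://%(host)s:%(port)s" % proxy_info  # format it just once
proxy_handler = urllib2.ProxyHandler({"http": proxy_url,
                                      "https": proxy_url})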
In case anyone else has this issue in the future, I'd like to point out that urllib2 does support https proxying now. Make sure the proxy supports it too, or you risk running into a bug that puts the Python library into an infinite loop (this happened to me).
See the unit test in the Python source that exercises https proxying support for further information:
http://svn.python.org/view/python/branches/release26-maint/Lib/test/test_urllib2.py?r1=74203&r2=74202&pathrev=74203