I'm currently running into an error when attempting to connect to JIRA using Python 2.7 and the JIRA REST API (http://jira-python.readthedocs.org/en/latest/).
When I execute the following:
from jira.client import JIRA
options = {
    'server': 'https://jira.companyname.com'
}
jira = JIRA(options)
I get the following error message in console:
requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Is there something that I may have missed or am doing incorrectly?
Thanks!
I know I'm late on this answer, but hopefully this helps someone down the road.
Why you shouldn't turn off verification
While turning off certificate verification is the easiest "solution", it is
not an advisable thing to do. It essentially says,
"I don't care if I trust you or not, I'm going to send you all my information anyway."
This opens you up for a Man-in-the-Middle attack.
If you're connecting to your company's Jira server and it has a
certificate for TLS/SSL, you should be verifying against that.
I'd ask your IT department where that certificate is. It's probably
in some root certificate for your company.
If you're connecting to the server in Chrome (for example), it should show a lock on the left-hand side of the address bar if the connection is secured over TLS/SSL.
You can Right-Click that lock -> Details -> View Certificate in Chrome.
Okay, so what do I do?
Provide the necessary certificate to the verify option directly.
jira-python uses Requests for HTTP stuff (see the documentation). According to the Requests documentation, you can specify a path to a certificate file in verify.
Thus, you can provide the root certificate for your company in verify like so:
jira_options = {
    'server': jira_server_name,
    'verify': 'path/to/company/root/certificate',
}
If you're using a Windows machine (a safe assumption?), that root
certificate is stored in the registry and the best way to get it
is using wincertstore.
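For example, you could dump the Windows certificate stores into a temporary PEM bundle and hand that to jira-python. A sketch, assuming the wincertstore package (pip install wincertstore) and that jira_server_name is defined elsewhere in your script:
import atexit
import wincertstore
from jira.client import JIRA

# Collect certificates from the Windows "CA" and "ROOT" stores into a
# temporary PEM file that requests can verify against.
certfile = wincertstore.CertFile()
certfile.addstore('CA')
certfile.addstore('ROOT')
atexit.register(certfile.close)  # remove the temporary file on exit

jira_options = {
    'server': jira_server_name,
    'verify': certfile.name,  # path of the generated PEM bundle
}
jira = JIRA(jira_options)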
I encountered a similar SSL certificate verification error, and looking through the "JIRA" method definitions, it's possible to turn off the verification.
:param options: Specify the server and properties this client will use. Use a dict with any
of the following properties:
* server -- the server address and context path to use. Defaults to ``http://localhost:2990/jira``.
* rest_path -- the root REST path to use. Defaults to ``api``, where the JIRA REST resources live.
* rest_api_version -- the version of the REST resources under rest_path to use. Defaults to ``2``.
* verify -- Verify SSL certs. Defaults to ``True``.
* resilient -- If it should just retry recoverable errors. Defaults to `False`.
Try this:
from jira.client import JIRA
options = {'server': 'https://jira.companyname.com', 'verify': False}
jira = JIRA(options)
On a Windows system, please do the following:
Go to the website using Google Chrome, then click on the lock button.
Now click on Certificate; a new window pops up.
Next click on Certification Path, select the first option from the list (the root), then select View Certificate; another window pops up.
Go to the Details tab and click Copy To File. Click Next, select the "Base-64 encoded X.509 (.CER)" radio button, click Next again, and save the .cer file locally.
Once the .cer file is obtained, add it to the Python script as follows:
jira_options = {
    'server': jira_server_name,
    'verify': 'path_to_directory_containing_certificate_file/certificate.cer'
}
This should work without any security warnings.
Just install the python-certifi-win32 module, and this should help you get past these errors without any more hassle.
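After a simple pip install python-certifi-win32, no code changes should be needed: the package hooks certifi so that the Windows certificate store is used for verification. A minimal sketch:
# Assumes python-certifi-win32 is installed in this environment; it
# patches certifi automatically, so plain requests calls verify
# against the Windows certificate store.
import requests
response = requests.get('https://jira.companyname.com')
print(response.status_code)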
I am trying to get the Google Cloud Platform Data Loss Prevention (DLP) client library for Python working behind an SSL proxy:
https://cloud.google.com/dlp/docs/libraries#client-libraries-usage-python
I am using the code snippet from the doc:
# Import the client library
import google.cloud.dlp
import os
import subprocess
import json
import requests
import getpass
import urllib.parse
import logging
logging.basicConfig(level=logging.DEBUG)
# Instantiate a client.
dlp_client = google.cloud.dlp.DlpServiceClient()
# The string to inspect
content = 'Robert Frost'
# Construct the item to inspect.
item = {'value': content}
# The info types to search for in the content. Required.
info_types = [{'name': 'FIRST_NAME'}, {'name': 'LAST_NAME'}]
# The minimum likelihood to constitute a match. Optional.
min_likelihood = 'LIKELIHOOD_UNSPECIFIED'
# The maximum number of findings to report (0 = server maximum). Optional.
max_findings = 0
# Whether to include the matching string in the results. Optional.
include_quote = True
# Construct the configuration dictionary. Keys which are None may
# optionally be omitted entirely.
inspect_config = {
    'info_types': info_types,
    'min_likelihood': min_likelihood,
    'include_quote': include_quote,
    'limits': {'max_findings_per_request': max_findings},
}
# Convert the project id into a full resource id.
parent = dlp_client.project_path('my-project-id')
# Call the API.
response = dlp_client.inspect_content(parent, inspect_config, item)
# Print out the results.
if response.result.findings:
    for finding in response.result.findings:
        try:
            print('Quote: {}'.format(finding.quote))
        except AttributeError:
            pass
        print('Info type: {}'.format(finding.info_type.name))
        # Convert likelihood value to string representation.
        likelihood = (google.cloud.dlp.types.Finding.DESCRIPTOR
                      .fields_by_name['likelihood']
                      .enum_type.values_by_number[finding.likelihood]
                      .name)
        print('Likelihood: {}'.format(likelihood))
else:
    print('No findings.')
I also set up the following environment variable:
GOOGLE_APPLICATION_CREDENTIALS
It runs without issue when I am not behind an SSL proxy. When I am working behind a proxy, I set the following three environment variables:
REQUESTS_CA_BUNDLE
HTTP_PROXY
HTTPS_PROXY
With this setup, the other GCP client Python libraries (for example, storage and bigquery) work fine behind an SSL proxy. For the DLP client Python library, I am getting:
E0920 12:21:49.931000000 24852 src/core/tsi/ssl_transport_security.cc:1229] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
DEBUG:google.api_core.retry:Retrying due to 503 Connect Failed, sleeping 0.0s ...
E0920 12:21:50.927000000 24852 src/core/tsi/ssl_transport_security.cc:1229] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
DEBUG:google.api_core.retry:Retrying due to 503 Connect Failed, sleeping 0.0s ...
I didn't find anything in the documentation explaining whether this library works behind a proxy like the other GCP client libraries, or how to configure it for an SSL proxy. The library is in beta, so it could be that this is not yet implemented. It seems related to the CA certificate and the handshake; there is no issue with the same CA for the BigQuery and Storage client Python libraries. Any ideas?
Your proxy is performing TLS Interception. This results in the Google libraries not trusting the SSL certificate that your proxy is presenting when accessing Google API endpoints. This is a man-in-the-middle problem.
The solution is to bypass the proxy for Google APIs. In the VPC subnet where your application is running, enable Private Google Access. This requires that the default VPC routing rule still exists (or that you recreate it).
Private Google Access
[EDIT after comments below]
I am adding this comment to scare the beeswax out of management.
TLS Interception is so dangerous that no reasonable company would implement it if they read the following.
The scenario in this example. I am an IT person responsible for a corporate proxy. The company has implemented TLS Interception and I control the proxy. I have no access to Google Cloud resources for my company. I am very smart and I understand Google Cloud IAM and OAuth very well. I am going to hack my company because maybe I did not get a raise (invent your own reason).
I wait for one of the managers who has an organization or project owner/editor level permissions to authenticate with Google Cloud. My proxy logs the HTTPS headers, body and response for everything going to https://www.googleapis.com/oauth2/v4/token and a few more URLs.
Maybe the proxy is storing the logs on a Google Cloud Bucket or a SAN volume without solid authorization implemented. Maybe I am just a software engineer that finds the proxy log files laying about or easily accessed.
The corporate admin logs into his Google Account. I capture the returned OAuth Access Token. I can now impersonate the org admin for the next 3,600 seconds. Additionally, I capture the OAuth Refresh Token. I can now recreate OAuth Access Tokens at my will anytime I want until the Refresh Token is revoked which for most companies, they never do.
For doubters, study my Golang project which shows how to save OAuth Access Tokens and Refresh Tokens to a file for any Google Account used to authenticate. I can take this file home and be authorized without any authentication. This code will recreate the Access Token when it expires giving me almost forever access to any account these credentials are authorized for. Your internal IT resources will never know that I am doing this outside of your corporate network.
Note: Stackdriver Audit logging can capture the IP address, however, the identity will be the credentials that I stole. To hide my IP address, I would go to Starbucks or a public library a few hours drive from my home/job and do my deeds from there. Now figure out the where and the who for this hacker. This will give a forensics expert heartburn.
https://github.com/jhanley-com/google-cloud-shell-cli-go
Note: This problem is not an issue with Google OAuth or Google Cloud. This is an example of a security problem that the company has deployed (TLS Interceptions). This style of technique will work for almost all authentication systems that I know of that do not use MFA.
[END EDIT]
Summary:
The Data Loss Prevention client library for Python uses gRPC: google-cloud-dlp uses gRPC, while google-cloud-bigquery and google-cloud-storage rely on the requests library for JSON-over-HTTPS. Because it is gRPC, other environment variables need to be set:
GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=path_file.pem
# for debugging
GRPC_TRACE=transport_security,tsi
GRPC_VERBOSITY=DEBUG
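For example, these could be set from Python before the client is created (a sketch; the bundle path is hypothetical and must include your proxy's CA certificate):
import os

# gRPC reads its trust store from this variable, not from REQUESTS_CA_BUNDLE.
os.environ['GRPC_DEFAULT_SSL_ROOTS_FILE_PATH'] = '/etc/ssl/certs/proxy-ca-bundle.pem'

import google.cloud.dlp
dlp_client = google.cloud.dlp.DlpServiceClient()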
More details and links can be found here (link). This doesn't solve all the issues, because it continues to fail after the handshake (TLS proxy) as described here (link). As well explained by @John Hanley, we should enable Private Google Access instead, which is the recommended and secure way. This is not yet in place in the network zone where I am using the APIs, so the proxy team added an SSL bypass and it is now working. I am waiting to have Private Google Access enabled to have a clean and secure setup for using the GCP APIs.
I am trying to connect to a Crate database with Python:
from crate import client
url = '434.342.435.2:4400' # Faked these numbers for purposes of this post
conn = client.connect(url)
It seems like I need to pass the cert_file and key_file arguments to client.connect which point to my .pem and .key files. Looking in the documentation, I cannot find any resource to create or download these files.
Any advice? Even a comment pointing me to a good resource for beginners would be appreciated.
So cert and key files are part of the TLS encryption of an HTTP(S) connection and are required if you use a self-signed certificate :)
This seems to be a very good explanation of the file types.
As mfussenegger explained in the comment, these files are optional and only required if your CrateDB instance is "hidden" behind a reverse proxy server like NGINX or Apache with a self-signed certificate.
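If you do need to pass them, the call might look like this (a sketch; the file paths are hypothetical and the address is the faked one from the question):
from crate import client

# cert_file/key_file are only needed for a self-signed/TLS setup.
conn = client.connect(
    '434.342.435.2:4400',
    cert_file='path/to/client.pem',  # your .pem file
    key_file='path/to/client.key',   # your .key file
)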
A small green lock on the far left of your browser's address bar indicates HTTPS (and therefore TLS) with a known certificate, while certificates signed by an unknown CA - like yourself - typically result in a warning page and a red indicator.
Since you are also referring to a username and password, they usually indicate some sort of auth (maybe basic auth), which is not yet supported by crate-python :(
I'm using Python 2.7.5 (not 3.X) and I need to verify a FTPS (FTP-TLS) public certificate. That is, I want to verify it against the standard certificate authority, not a custom key. (Similar to HTTPS.)
I see some options but I cannot get them to work:
The FTP_TLS() class doesn't seem to offer the ability to verify certificates, unless I'm mistaken:
class ftplib.FTP_TLS([host[, user[, passwd[, acct[, keyfile[, certfile[, timeout]]]]]]])
I've looked into certifi and also M2Crypto, but while I can connect and transfer using FTP/TLS, I can't seem to find a way to verify the certificate.
Also, I don't think I will be able to use the CURL libraries in this case :( Just a note.
Let's try to make it into a possible answer: http://heikkitoivonen.net/blog/2008/10/14/ssl-in-python-26
The resource referenced by mcepl is no longer available over http, but only using https.
https://heikkitoivonen.net/blog/2008/10/14/ssl-in-python-26
So much for 301 redirects.
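Coming back to the original question, one way to get certificate verification with ftplib is via an SSLContext (a sketch, assuming Python 2.7.9 or later, where ssl.create_default_context() is available; host and credentials are placeholders):
import ssl
from ftplib import FTP_TLS

# The default context verifies the server certificate against the
# system CA store and also checks the hostname.
context = ssl.create_default_context()
ftps = FTP_TLS(context=context)
ftps.connect('ftp.example.com', 21)
ftps.login('user', 'password')  # control connection is upgraded to TLS here
ftps.prot_p()                   # protect the data connection with TLS too
print(ftps.nlst())
ftps.quit()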
I want to use Python Requests to get the contents of internal company web page (say, https://internal.com). I can see this page in the browser, and I can "view the certificate."
So now I want to get the web page with Requests, so I do:
import requests
requests.get('https://internal.com')
But then I get an SSLError:
SSLError: [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
So I guess I need to specify a cert file:
requests.get('https://example.com', cert=('/path/server.crt', '/path/key'))
But how do I find the path to the cert file? Can I get this info from Chrome or IE when viewing the web page? Or am I missing something even more basic?
The cert parameter is for client-side authentication; it's used when you want to prove your identity to the server. If that were the problem, you would get an error from the server.
What you need is server-side authentication: the server has to prove its identity.
As you are connecting to an internal server, requests doesn't have that server's certificate in its supplied bundle and therefore can't confirm the server's identity.
You have to supply requests with your internal CA bundle. To do this, you have to extract it from your browser first.
From the docs:
You can also pass "verify" the path to a "CA_BUNDLE" file for private certs.
You can also set the "REQUESTS_CA_BUNDLE" environment variable.
Chrome (short version):
Put this in your URL bar: chrome://settings/certificates
Choose the "Authorities" tab
Find your internal CA and click export
The best format is "Base64 encoded certificate chain"
Save it to a location where you will find it again
Now you can use `requests.get(url, verify='path/to/exported/certificate')`
You can also visit the certificate manager by:
(Steps for Chrome; quite similar for other browsers)
Go to settings
Click "Show advanced settings" at the bottom
HTTPS/SSL -> "Manage Certificates"
See above
Make sure when you export the crt to select "export with chain" in the file-type dropdown, so that it will have all three certs in one file. That was my issue.
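As an alternative to passing verify= on every call, you can point the REQUESTS_CA_BUNDLE environment variable mentioned in the docs at the exported file (a sketch; the bundle path is a hypothetical name for the file you saved):
import os
import requests

# Every requests call in this process now verifies against this bundle.
os.environ['REQUESTS_CA_BUNDLE'] = '/path/to/internal-ca.pem'
response = requests.get('https://internal.com')
print(response.status_code)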
Today I faced one interesting issue.
I'm using the foursquare-recommended Python library httplib2, which raises
SSLHandshakeError(SSLError(1, '_ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed'),)
while trying to request an OAuth token:
response, body = h.request(url, method, headers=headers, body=data)
in the _process_request_with_httplib2 function.
Does anyone know why this happens?
If you know that the site you're trying to get is a "good guy", you can try creating your "opener" like this:
import httplib2
if __name__ == "__main__":
    h = httplib2.Http(".cache", disable_ssl_certificate_validation=True)
    resp, content = h.request("https://site/whose/certificate/is/bad/", "GET")
(the interesting part is disable_ssl_certificate_validation=True)
From the docs:
http://bitworking.org/projects/httplib2/doc/html/libhttplib2.html#httplib2.Http
EDIT 01:
Since your question was actually why does this happen, you can check this or this.
EDIT 02:
Seeing how this answer has been visited by more people than I expected, I'd like to explain a bit when disabling certificate validation could be useful.
First, a bit of light background on how these certificates work. There's quite a lot of information in the links provided above, but here it goes, anyway.
The SSL certificates need to be verified by a well known (at least, well known to your browser) Certificate Authority. You usually buy the whole certificate from one of those authorities (Symantec, GoDaddy...)
Broadly speaking, the idea is: those Certificate Authorities (CAs) give you a certificate that also contains the CA information in it. Your browsers have a list of well-known CAs, so when your browser receives a certificate, it will do something like: "HmmmMMMmmm.... [the browser makes a suspicious face here] ... I received a certificate, and it says it's verified by Symantec. Do I know that 'Symantec' guy? [the browser then goes to its list of well-known CAs and checks for Symantec] Oh, yeah! I do. Ok, the certificate is good!"
You can see that information yourself if you click on the little lock by the URL in your browser.
However, there are cases in which you just want to test the HTTPS, and you create your own Certificate Authority using a couple of command line tools and you use that "custom" CA to sign a "custom" certificate that you just generated as well, right? In that case, your browser (which, by the way, in the question is httplib2.Http) is not going to have your "custom" CA among the list of trusted CAs, so it's going to say that the certificate is invalid. The information is still going to travel encrypted, but what the browser is telling you is that it doesn't fully trust that is traveling encrypted to the place you are supposing it's going.
For instance, let's say you created a set of custom keys and CAs and all the mumbo-jumbo following this tutorial for your localhost FQDN, and that your CA certificate file is located in the current directory. You could very well have a server running on https://localhost:4443 using your custom certificates and whatnot. Now, your CA certificate file is located in the current directory, in the file ./ca.crt (the same directory your Python script is going to be running in). You could use httplib2 like this:
h = httplib2.Http(ca_certs='./ca.crt')
response, body = h.request('https://localhost:4443')
print(response)
print(body)
... and you wouldn't see the warning anymore. Why? Because you told httplib2 to go look for the CA's certificate in ./ca.crt.
However, since Chrome (to cite a browser) doesn't know about this CA's certificate, it will consider it invalid.
Also, certificates expire. There's a chance you are working in a company which uses an internal site with SSL encryption. It works ok for a year, and then your browser starts complaining. You go to the person that is in charge of the security, and ask "Yo!! I get this warning here! What's happening?" And the answer could very well be "Oh boy!! I forgot to renew the certificate! It's ok, just accept it from now, until I fix that." (true story, although there were swearwords in the answer I received :-D )
Recent versions of httplib2 default to their own certificate store:
# Default CA certificates file bundled with httplib2.
CA_CERTS = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "cacerts.txt")
If you're using Ubuntu/Debian, you can explicitly pass the path to the system certificate file, like:
httplib2.HTTPSConnectionWithTimeout(HOST, ca_certs="/etc/ssl/certs/ca-certificates.crt")
Maybe this could be the case:
I got the same problem, and debugging the Google lib I found out that the reason was that I was using an older version of httplib2 (0.9.2). When I updated to the most recent (0.14.0), it worked.
If you have already installed the most recent version, make sure that some lib is not installing an older version of httplib2 inside its dependencies.
When you see this error with a self-signed certificate, as often happens behind a corporate proxy, you can point httplib2 to your custom certificate bundle using an environment variable. This helps when, for example, you don't want to (or can't) modify the code to pass the ca_certs parameter.
You can also do this when you don't want to modify the system certificate store to append your CA cert.
export HTTPLIB2_CA_CERTS="\path\to\your\CA_certs_bundle"
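The Python equivalent would be (a sketch; set the variable before importing httplib2 so its default certificate store picks it up, and substitute your real bundle path):
import os
os.environ['HTTPLIB2_CA_CERTS'] = '/path/to/your/CA_certs_bundle'

import httplib2  # imported after the variable is set
h = httplib2.Http()
resp, content = h.request('https://internal.example.com/', 'GET')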