I'm trying to make a https request from within a docker container. Here's python code that runs fine on my Windows 10 host:
import certifi
import ssl
import urllib.request
tmp_filename = "penguin.jpg"
pingu_link = "https://i.pinimg.com/originals/cc/3a/1a/cc3a1ae4beafdd5ac2293824f1fb0437.jpg"
print(certifi.where())
default = ssl.create_default_context()
https_handler = urllib.request.HTTPSHandler(context=ssl.create_default_context())
opener = urllib.request.build_opener(https_handler)
# add user agent headers to avoid 403 response
opener.addheaders = [
    (
        "User-agent",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0",
    )
]
urllib.request.install_opener(opener)
r = urllib.request.urlretrieve(pingu_link, tmp_filename)
If I understand correctly, certifi ships with its own set of CA certificates, contained in the .pem file whose path certifi.where() returns. However, if I convert this file to .crt and tell the request to use it by calling
https_handler = urllib.request.HTTPSHandler(context=ssl.create_default_context(cafile="cacert.crt"))
the verification fails: ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129). As this post explains, certifi also automatically imports certificates from the Windows certificate store.
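(As a side note: an SSLContext can trust certifi's bundle and an extra CA at the same time. A minimal sketch, where extra_root.pem is a hypothetical file holding an additional root certificate:)
import ssl
import urllib.request
import certifi

# start from certifi's bundle, then add one extra root on top
ctx = ssl.create_default_context(cafile=certifi.where())
ctx.load_verify_locations(cafile="extra_root.pem")  # hypothetical extra CA file
https_handler = urllib.request.HTTPSHandler(context=ctx)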
Now I'm a little confused about what this means if you want to verify SSL certificates in a Docker container. It seems like there are two options:
Just install the ca-certificates package. It should provide the necessary public keys for most CAs.
Install your own (possibly self-signed) certificate: copy it into your Docker container and register it with the ca-certificates package by calling update-ca-certificates. (You could also install it in the Windows global certificate store, and according to this issue on GitHub it should then work with Docker out of the box.)
Unfortunately, the first approach does not work for me: it raises the same verification error as above. Even worse, since I don't know which .crt file is used to verify certificates outside of Docker, the second option is not a possibility either. Here's the Dockerfile:
# start with a python env that already has urllib3 installed
FROM company_harbor/base_py3_container
ENV HTTP_PROXY="my_company_proxy"
ENV HTTPS_PROXY="my_company_proxy"
# install ca certificates
RUN apt-get update && \
    apt-get install -y ca-certificates && \
    apt-get clean
RUN pip install --upgrade certifi --trusted-host=pypi.org --trusted-host=files.pythonhosted.org
# what I would do if I found the right .crt file
# COPY cacert.crt /usr/share/ca-certificates/cacert.crt
# RUN chmod 644 /usr/share/ca-certificates/cacert.crt
RUN update-ca-certificates
COPY ./download_penguin.py ./download_penguin.py
CMD [ "python", "download_penguin.py" ]
What do you need to do in order to verify SSL certificates with python in docker?
It turns out that company proxies can swap SSL certificates in a man-in-the-middle manner.
The standard certificates from apt-get install ca-certificates or Python's certifi package will not include these company certificates. Also, this is not specifically a Docker question but a question of "how to install a root certificate on Linux"; Debian in this case, because that's what my base image runs on.
This was not as straightforward as expected. Here's what worked in the end:
Use the company's certificates in .pem format to begin with.
Rename them so they end in .crt. Do NOT apply any openssl .pem-to-.crt conversion. In my case, every .crt file I found online was encoded in a way that made it unreadable for Notepad++, Vim and the like, while the .pem files looked fine.
Copy the renamed certificates to the proper ca-certificates location of your OS.
Install the certificates via update-ca-certificates.
Translated into a Dockerfile, here's the important part:
COPY root.pem /usr/local/share/ca-certificates/root.crt
COPY proxy.pem /usr/local/share/ca-certificates/proxy.crt
RUN update-ca-certificates
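For completeness: ssl.create_default_context() on Linux already picks up this system bundle, so plain urllib should work after the COPY and update-ca-certificates steps. Libraries that default to certifi's bundle instead (requests, for example) may need to be pointed at the system file explicitly. A minimal sketch, assuming the Debian default bundle path and that requests is installed in the image:
import requests  # assumption: requests is available in the image

# /etc/ssl/certs/ca-certificates.crt is the bundle that update-ca-certificates maintains on Debian
r = requests.get("https://example.com/", verify="/etc/ssl/certs/ca-certificates.crt")
print(r.status_code)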
Related
Looking for a quick way to serve an API over HTTPS for testing purposes. The API app is created using Flask and served on port 443 using gunicorn.
gunicorn --certfile=server.crt --keyfile=server.key --bind 0.0.0.0:443 wsgi:app
When my React app (served over HTTPS) sends a POST request to one of the routes via HTTPS, the browser console is showing
POST https://1.2.3.4/foo net::ERR_CERT_AUTHORITY_INVALID
My key and certs are created using
openssl genrsa -aes128 -out server.key 2048
openssl rsa -in server.key -out server.key
openssl req -new -days 365 -key server.key -out server.csr
openssl x509 -in server.csr -out server.crt -req -signkey server.key -days 365
Is there a solution to ERR_CERT_AUTHORITY_INVALID raised by the browser, without using a reverse proxy like nginx/caddy, and without each user having to manually trust the self-signed cert?
Your browser/computer/device needs to trust the certificate presented by gunicorn.
You should add the hostname of your PC to the certificate (as the Common Name or, better, a Subject Alternative Name) and add the certificate to your trusted list of certificates.
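To see which names a served certificate actually contains, something like the following works. This is only a sketch: host and port are placeholders, and it assumes the cryptography package is installed:
import ssl
from cryptography import x509  # assumes the 'cryptography' package is available

host, port = "1.2.3.4", 443  # placeholders for your server

pem = ssl.get_server_certificate((host, port))  # fetches the cert without validating it
cert = x509.load_pem_x509_certificate(pem.encode())
print(cert.subject)
try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    print(san.value.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    print("no subjectAltName extension")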
I ran into a similar problem recently on Firefox, creating the cert using OpenSSL.
I opted for an alternative solution using mkcert:
sudo apt install libnss3-tools
sudo apt install mkcert
wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
sudo cp mkcert-v1.4.4-linux-amd64 /usr/local/bin/mkcert
sudo chmod +x /usr/local/bin/mkcert
mkcert -install
mkcert test.example.com '*.test.example.com' localhost 127.0.0.1 ::1
You'll want to modify /etc/hosts to include test.example.com:
127.0.0.1 localhost test.example.com
Don't forget to log out and log back in to pick up the changes to hosts.
If Firefox still complains, go to Settings -> Privacy & Security and open View Certificates.
Under the Servers tab, add an exception for https://test.example.com:(port #) and select Get Certificate.
Then Confirm Security Exception.
Now fire up gunicorn using the PEM-format files generated by mkcert.
In my case it was something like:
gunicorn --certfile test.example.com+4.pem --keyfile test.example.com+4-key.pem
Your cert should be accepted now.
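To double-check from Python as well, one can point requests directly at mkcert's root CA. A sketch, where the URL and port are placeholders for your gunicorn binding:
import subprocess
import requests  # assumes requests is installed

# mkcert stores its root CA in the directory printed by `mkcert -CAROOT`
caroot = subprocess.run(["mkcert", "-CAROOT"], capture_output=True, text=True).stdout.strip()
r = requests.get("https://test.example.com:8443/", verify=caroot + "/rootCA.pem")  # placeholder URL/port
print(r.status_code)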
Each member of our team has to set this up locally. (Specifically, we use an installer script to build the dev project, but each dev is responsible for installing the cert in the browser of their choosing.)
For us it was a small inconvenience for the payoff.
If this doesn't suit your needs then unfortunately yes, you might have to opt for an alternative such as Caddy or nginx to reverse-proxy your requests. But you'd still have to supply a certificate using some version of the example above or via tools like certbot etc.
I'd recommend a pre-configured Docker container, or a custom installer script, if you're working on a team-based project.
I am able to connect to a certain URL with cURL, after I installed the corresponding SSL certificates:
$ export MY_URL=https://www.infosubvenciones.es/bdnstrans/GE/es/convocatoria/616783
$ curl -vvvv $MY_URL # Fails
$ sudo openssl x509 -inform pem -outform pem -in /tmp/custom-cert.pem -out /usr/local/share/ca-certificates/custom-cert.crt
$ sudo update-ca-certificates
$ curl -vvvv $MY_URL # OK
However, requests (or httpx, or any other library I use) refuses to do so:
In [1]: import os
...: import requests
...: requests.get(os.environ["MY_URL"])
---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
...
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
My understanding is that requests uses certifi, and as such these custom certificates are not available here:
In [1]: import certifi
In [2]: certifi.where()
Out[2]: '/tmp/test_ca/.venv/lib/python3.10/site-packages/certifi/cacert.pem'
I have already tried a number of things, like trying to use the system CA bundle:
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt (same error)
requests.get(..., verify="/etc/ssl/certs/ca-certificates.crt") (same error)
switched to httpx + a custom SSL context as explained in the docs (same error)
attempted truststore as discussed in this httpx issue (same error)
How can I make Python (requests, httpx, raw ssl, anything) use the same certificates that cURL is successfully using?
The only thing that worked so far, inspired by this hackish SO answer, is to do verify=False. But I don't want to do that.
In [9]: requests.get(
...: my_url,
...: verify=False,
...: )
/tmp/test_ca/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py:1043: InsecureRequestWarning: Unverified HTTPS request is being made to host 'xxx'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
I tried your stuff on my system (Manjaro Linux, Python 3.10) and I can make a connection. I downloaded the complete certificate chain from the website (with my browser). After that I can use it with:
r = requests.get(url=URL, verify=<path to pem file>)
and with
export REQUESTS_CA_BUNDLE=<path to pem>
r = requests.get(url=URL)
I tried the export within PyCharm.
So the Python side is working, and you may have a problem in your certificates. Without this I get the SSL error (of course), because Python does not use the system certs, as you correctly mentioned. In my PEM file I have 3 certificates. Maybe you have only 1 and the others are in the global store, so that curl, unlike Python, does not need the complete chain. You should try to download the complete chain with your browser and try again.
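A quick way to check whether a downloaded bundle actually contains the whole chain is to count the certificates in it. A tiny sketch, where chain.pem is a placeholder for the downloaded file:
# count the certificates in a PEM bundle (chain.pem is a placeholder name)
with open("chain.pem") as f:
    print(f.read().count("-----BEGIN CERTIFICATE-----"))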
Studying https://pip.pypa.io/en/stable/topics/configuration/ I understand that I can have multiple pip.conf files (on a UNIX-based system) which are loaded in the described order.
My task is to write a bash script that automatically creates a virtual environment and sets pip configuration only for the virtual environment.
# my_bash_script.sh
...
python -m virtualenv .myvenv
....
touch pip.conf
# this will create path/to/.myvenv/pip.conf
# otherwise the following commands would end up in the user's pip.conf at ~/.config/pip/pip.conf
path/to/.myvenv/bin/python -m pip config set global.proxy "my-company-proxy.com"
# setting our company proxy here
path/to/.myvenv/bin/python -m pip config set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"
# because of SSL issues from behind the company's firewall I need this to make pip work
...
My problem is that I want to set the configuration not for global but for site. If I exchange global.proxy and global.trusted-host for site.proxy and site.trusted-host, pip is no longer able to install packages, whereas everything works fine if I leave it at global. Changing it to install.proxy and install.trusted-host doesn't work either.
The pip.conf file looks like this afterwards:
# /path/to/.myvenv/pip.conf
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
pip config debug yields the following:
env_var:
env:
global:
  /etc/xdg/pip/pip.conf, exists: False
  /etc/pip.conf, exists: False
site:
  /path/to/.myvenv/pip.conf, exists: True
    global.proxy: my-company-proxy.com
    global.trusted-host: pypi.org pypi.python.org files.pythonhosted.org
user:
  /path/to/myuser/.pip/pip.conf, exists: False
  /path/to/myuser/.config/pip/pip.conf, exists: True
What am I missing here?
Thank you in advance for your help!
The [global] in the config file refers to the fact that these settings are used for all pip commands (see this section of the manual). So you can do something like
[global]
timeout = 60
[freeze]
timeout = 10
The global/site distinction comes from the location of the config file. So your file /path/to/.myvenv/pip.conf is referred to as the site config file through its location. In it, you still need to have
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
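As a side note, pip can also write to the site file directly, which avoids creating pip.conf by hand; if I remember the CLI correctly, the target file is chosen with the --site option:
path/to/.myvenv/bin/python -m pip config --site set global.proxy "my-company-proxy.com"
path/to/.myvenv/bin/python -m pip config --site set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"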
Trying to connect to an Azure CosmosDB mongo server results in an SSL handshake error.
I am using Python 3 and PyMongo to connect to my Azure CosmosDB. The connection works fine if I run the code with Python 2.7, but it causes the below error when using Python 3:
import pymongo
from pymongo import MongoClient
import json
import sys


def check_server_status(client, data):
    '''check the server status of the connected endpoint'''
    db = client.result_DB
    server_status = db.command('serverStatus')
    print('Database server status:')
    print(json.dumps(server_status, sort_keys=False, indent=2, separators=(',', ': ')))
    coll = db.file_result
    print(coll)
    coll.insert_one(data)


def main():
    uri = "mongodb://KEY123@backend.documents.azure.com:10255/?ssl=true&replicaSet=globaldb"
    client = pymongo.MongoClient(uri)
    emp_rec1 = {
        "name": "Mr.Geek",
        "eid": 24,
        "location": "delhi"
    }
    check_server_status(client, emp_rec1)


if __name__ == "__main__":
    main()
Running this on Python3 results into below error:
pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed:
backendstore.documents.azure.com:10255: [SSL:
CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)
Here is my successful output when I run the same code with Python27:
Database server status: { "_t": "OKMongoResponse", "ok": 1 }
Collection(Database(MongoClient(host=['backend.documents.azure.com:10255'],
document_class=dict, tz_aware=False, connect=True, ssl=True,
replicaset='globaldb'), u'result_DB'), u'file_result')
On Windows you can do it like this:
pip install certifi
Then use it in code:
import certifi
ca = certifi.where()
client = pymongo.MongoClient(
    "mongodb+srv://username:password@cluster0.xxxxx.mongodb.net/xyzdb?retryWrites=true&w=majority",
    tlsCAFile=ca)
Solved the problem with this change:
import ssl

client = pymongo.MongoClient(uri, ssl_cert_reqs=ssl.CERT_NONE)  # note: this disables certificate verification
The section Troubleshooting TLS Errors of the official PyMongo document TLS/SSL and PyMongo introduces the issue as below.
TLS errors often fall into two categories, certificate verification failure or protocol version mismatch. An error message similar to the following means that OpenSSL was not able to verify the server’s certificate:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
This often occurs because OpenSSL does not have access to the system’s root certificates or the certificates are out of date. Linux users should ensure that they have the latest root certificate updates installed from their Linux vendor. macOS users using Python 3.6.0 or newer downloaded from python.org may have to run a script included with python to install root certificates:
open "/Applications/Python <YOUR PYTHON VERSION>/Install Certificates.command"
Users of older PyPy and PyPy3 portable versions may have to set an environment variable to tell OpenSSL where to find root certificates. This is easily done using the certifi module from pypi:
$ pypy -m pip install certifi
$ export SSL_CERT_FILE=$(pypy -c "import certifi; print(certifi.where())")
You can try to follow the description above to fix your issue; it seems to be aimed at Linux and macOS users. On Windows, I cannot reproduce your issue with Python 3.7 or 3.6. If you have any concerns, please feel free to let me know.
Faced the same issue when trying to connect to MongoDB from DigitalOcean.
Solved it by using this function with params in MongoClient:
import ssl

from pymongo import MongoClient


def get_client(host, port, username, password, db):
    return MongoClient('mongodb://{}:{}/'.format(host, port),
                       username=username,
                       password=password,
                       authSource=db,
                       ssl=True, ssl_cert_reqs=ssl.CERT_NONE)  # disables certificate verification


client = get_client("host-ip", "port", "username", "password", "db-name")
On Mac Mojave 10.14.6, I used PyMongo 3.10 and Python 3.7 to solve:
flask pymongo pymongo.errors.ServerSelectionTimeoutError [SSL: CERTIFICATE_VERIFY_FAILED]
Execute in terminal:
sudo /Applications/Python\ 3.7/Install\ Certificates.command
If you use another Python version, only change the version number (in my case, I have Python 3.7).
cluster = MongoClient(
"url",
ssl=True,
ssl_cert_reqs=ssl.CERT_NONE,
)
By default PyMongo relies on the operating system's root certificates.
It could be that Atlas itself updated its certificates, or it could be that something on your OS changed. "certificate verify failed" often occurs because OpenSSL does not have access to the system's root certificates or the certificates are out of date. For how to troubleshoot, see TLS/SSL and PyMongo in the PyMongo 3.12.0 documentation.
Please try:
client = pymongo.MongoClient(connection, tlsCAFile=certifi.where())
and don't forget to install certifi.
On macOS Monterey, I used PyMongo 3.12.1 and a virtual environment.
To solve it, use
ssl_cert_reqs=ssl.CERT_NONE
with the MongoDB URL.
I've done a lot of research, and I can't find anything that actually solves my issue.
Since basically no site accepts mitmdump's certificate for HTTPS, I want to ignore those hosts. I can access a specific website with --ignore-hosts (ip) like normal, but I need to ignore all HTTPS/SSL hosts.
Is there any way I can do this at all?
Thanks a lot!
There is a script called tls_passthrough.py on the mitmproxy GitHub which ignores hosts that have previously failed a handshake due to the user not trusting the new certificate. It does not persist between sessions, though.
This also means that the first SSL connection from such a host will always fail. What I suggest is that you write out all the IPs which have failed previously into a text document and ignore all hosts that are in that file.
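That suggestion could look something like the following small helper. A sketch only: failed_hosts.txt is a hypothetical file you maintain yourself, with one host or IP per line:
import re

# build a regex for --ignore-hosts from previously failed hosts
with open("failed_hosts.txt") as f:
    hosts = [re.escape(line.strip()) for line in f if line.strip()]
print("|".join(hosts))  # pass the output to: mitmproxy --ignore-hosts '<output>'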
tls_passthrough.py
To simply start it, you add it with the script argument -s (tls_passthrough.py path).
For example:
mitmproxy -s tls_passthrough.py
You need a simple addon script to ignore all TLS connections:
import mitmproxy


class IgnoreAllTLS:
    def __init__(self) -> None:
        pass

    def tls_clienthello(self, data: mitmproxy.proxy.layers.tls.ClientHelloData):
        '''
        ignore all tls event
        '''
        # LOGC("tls hello from " + str(data.context.server) + " ,ignore_connection=" + str(data.ignore_connection))
        data.ignore_connection = True


addons = [
    IgnoreAllTLS()
]
The latest version (7.0.4 as of now) does not support the ignore_connection feature yet, so you need to install the main (development) version:
git clone https://github.com/mitmproxy/mitmproxy.git
cd mitmproxy
python3 -m venv venv
Activate the venv before starting the proxy:
source /path/to/mitmproxy/venv/bin/activate
Then start mitmproxy:
mitmproxy -s ignore_all_tls.py
You can ignore all https/SSL traffic by using a wildcard:
mitmproxy --ignore-hosts '.*'