I am trying to set up TLS-encrypted connections to a MongoDB database using PyMongo. I have two Python installations at two different locations, both version 3.6.8, and both have PyMongo 4.1.1 installed. I have completed the process of generating the CA keys and server private keys. I then added ca.pem to '/etc/pki/ca-trust/source/anchors/' and ran 'sudo update-ca-trust' to add the certificate authority to the operating system certificate store. Then I updated the mongod.conf file and restarted the mongod instance. I am able to connect to the mongo shell using this command:
mongo --tls --host='server-host-name'
The main issue is that I am able to connect to the database using one Python binary, but the other gives this error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
error=AutoReconnect('SSL handshake failed:....)]
For reference, the output of openssl version -d is:
OPENSSLDIR: "/etc/pki/tls"
One workaround that makes the other Python binary work is to explicitly export the CA bundle path in an environment variable:
export SSL_CERT_FILE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
But I want the other Python binary to look for the CAs in the appropriate directory automatically as well.
All these tests are performed locally and not through remote connections (which would require the certificate paths to be specified explicitly). I want to understand the internal workings of pymongo.MongoClient for TLS connections in detail, specifically how it fetches the CA files from the operating system certificate store.
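For what it's worth, PyMongo by default builds its TLS layer on the standard library's ssl module, so a quick way to compare the two interpreters is to print the CA paths compiled into the OpenSSL each one is linked against. A minimal diagnostic sketch; run it under both binaries:

import ssl

# Paths baked into the linked OpenSSL; load_default_certs() falls back to
# these (and to the SSL_CERT_FILE / SSL_CERT_DIR overrides) when no CA
# file is passed explicitly.
print(ssl.get_default_verify_paths())

# create_default_context() loads the default CAs when given no cafile.
ctx = ssl.create_default_context()
print(len(ctx.get_ca_certs()), "CA certificates loaded")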
Also, how do I increase logging for PyMongo? Is there any workaround for this? Can someone help me debug this? I can add additional information if required. Thank you.
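PyMongo 4.1 does not emit much through the standard logging module, but one option for more visibility is its monitoring API. A sketch, assuming a heartbeat listener is enough to surface the handshake failures ('server-host-name' is a placeholder):

import logging
from pymongo import MongoClient, monitoring

logging.basicConfig(level=logging.INFO)

class HeartbeatLogger(monitoring.ServerHeartbeatListener):
    def started(self, event):
        logging.info("heartbeat started: %s", event.connection_id)

    def succeeded(self, event):
        logging.info("heartbeat ok: %s", event.connection_id)

    def failed(self, event):
        # event.reply carries the exception, e.g. the SSL handshake error.
        logging.warning("heartbeat failed: %s", event.reply)

client = MongoClient("mongodb://server-host-name/?tls=true",
                     event_listeners=[HeartbeatLogger()])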
Related
I am trying to retrieve certificate data using Python 3.11.0 on Windows, using the ssl.create_default_context function.
While it does retrieve some certificates, it doesn't retrieve all of them. For example, I added a new GoDaddy SSL certificate to the Certificates snap-in of the MMC, specifically to the "Trusted Root Certification Authorities" section, since I saw that ssl.create_default_context pulls existing certificates from there.
After adding said certificate and running the following lines of code:
import ssl

certs = ssl.create_default_context().get_ca_certs(binary_form=True)
pem_certs = [ssl.DER_cert_to_PEM_cert(der) for der in certs]
I'm getting a list of certificates, but it doesn't include the new certificate I manually added.
I read through the function's documentation but didn't find any flag that could help me retrieve the extra certificates.
Any help would be greatly appreciated, thanks!
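One way to check what Python actually sees in the Windows stores is ssl.enum_certificates, which is what load_default_certs() uses under the hood on Windows. A minimal sketch ("CA" and "ROOT" are the standard system store names):

import ssl

# Windows-only: enumerate the system stores that load_default_certs() reads.
for store in ("CA", "ROOT"):
    entries = ssl.enum_certificates(store)
    print(store, "->", len(entries), "entries")

If the count doesn't change after adding your certificate, it was likely added to a per-user or other store that Python does not enumerate.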
I am working on a Python script on Windows 10 that connects to and consumes a Kafka topic. The SSL certificate is installed on the Windows server in .jks format, and the SSL connection to Kafka is only possible with this certificate.
I wanted to know if there is a way to tell Python to get the default certificate from a specific location. Will Python accept a .jks certificate? If not, what options do I have?
Python isn't Java. JKS files really only work within the context of a JVM.
You can use keytool commands to export a PEM certificate from a JKS file to be used for non-Java purposes; see, for example:
How to convert trust certificate from .jks to .pem?
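A minimal sketch of that export (the alias, file names, and store password are placeholders for your own values):

keytool -exportcert -alias my-kafka-ca -keystore truststore.jks -storepass changeit -rfc -file kafka-ca.pem

The -rfc flag writes the certificate in PEM format; the resulting kafka-ca.pem can then be handed to your Python client, e.g. as the ssl_cafile parameter if you are using kafka-python.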
I have been trying to connect to a RabbitMQ instance (created through the AWS messaging service, if it matters) via Celery 5.0.5. The connection link starts as follows: amqps://user:password#..../. I receive the following error when running my Python script:
consumer: Cannot connect to amqps://sessionstackadmin:**#b-0482d011-0cca-40bd-968e-c19d6c85e2a9.mq.eu-central-1.amazonaws.com:5671//: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
I am running the script from a Docker container with Python 3.6.12. The Docker container has access to the endpoint (at least it can telnet to it). I have the feeling that the Python process does not respect the distro certificate chain and just fails to verify the certificate.
I solved it! Celery uses Kombu, which uses py-amqp, and it happens that the latest version, 5.0.3 from Jan 19, is broken.
My GH ticket https://github.com/celery/py-amqp/issues/349
Solution: add amqp==5.0.2 as a hard dependency in your project requirements.
Fix at: git+git://github.com/celery/py-amqp.git#0b8a832d32179d33152d886acd6f081f25ea4bf2
I am leaving the workaround that "fixes" this. For some reason, when handling SSL connections, the Kombu library does not respect the default CA certs that come with your distribution. Loading those is normally handled by https://docs.python.org/3/library/ssl.html#ssl.create_default_context, which the library does not use. Sadly, Kombu does not allow passing in a custom SSLContext, only a set of options that is later passed down to the context. One such option is broker_use_ssl. By setting it to {'ca_certs': '/etc/ssl/certs/ca-certificates.crt'} it will respect the CA certs from the distribution (keep in mind that I am using an Ubuntu/Debian-based image and the CA certs bundle resides at that path; if you are using another distro, check the proper location for your CA certs).
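A minimal sketch of that setting on a Celery app (the broker URL is a placeholder, and cert_reqs is my addition to make verification explicit):

import ssl
from celery import Celery

app = Celery("tasks", broker="amqps://user:password@broker-host:5671//")
app.conf.broker_use_ssl = {
    # Distro CA bundle; this path is for Debian/Ubuntu-based images.
    "ca_certs": "/etc/ssl/certs/ca-certificates.crt",
    "cert_reqs": ssl.CERT_REQUIRED,
}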
I'm using Python on CentOS 7 and I have installed GSK8Kit with the DB2 11.3 client.
So I set:
IBM_DB_HOME=/path/to/my/db2client/sqllib   (the ODBC and CLI driver location)
I also set:
LD_LIBRARY_PATH=$IBM_DB_HOME/lib:$LD_LIBRARY_PATH
Then I installed ibm_db:
pip install ibm_db
I added my db2servercert.arm into the mykeydb.kdb file located in /opt/IBM/db2/GSK8KitStore, and I'm using the same version of GSK8Kit on the client and the server:
gsk8capicmd_64 -cert -add -db mykeydb.kdb -stashed -label "DB2 Server self-signed certificate" -file db2servercert.arm -format ascii -trust enable
According to the IBM docs (https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html):
From Db2 V10.5 FP5 onwards, the SSLClientKeystoredb and SSLClientKeystash keywords are not needed in the connection string, db2cli.ini file, FileDSN, or db2dsdriver.cfg file. If you have not set or passed values for the SSLClientKeystoredb and SSLClientKeystash keywords, the CLI/ODBC client driver will create a default key database internally during the first SSL connection. The client driver will call GSKit APIs to create a key database populated with the default root certificates.
Now I'm trying to create an ibm_db connection string for a Db2 SSL connection using various scenarios:
1) Security=ssl and SSLServerCertificate=/path/to/my/db2servercert.arm:
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;SSLServerCertificate=/path/to/my/db2servercert.arm;"
2) Security=ssl with SSLClientKeystoredb=/opt/IBM/db2/GSK8KitStore/mykeydb.kdb and SSLClientKeystash=/opt/IBM/db2/GSK8KitStore/mystashfile.sth:
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;SSLClientKeystoredb=/opt/IBM/db2/GSK8KitStore/mykeydb.kdb;SSLClientKeystash=/opt/IBM/db2/GSK8KitStore/mystashfile.sth;"
3) Security=ssl only:
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;"
With 1) and 2) I was able to connect without any SSL errors, but with 3) I'm getting a socket 414 error:
[IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "SSL".
Communication API being used: "SOCKETS". Location where the error was detected: "".
Communication function detecting the error: "sqlccSSLSocketSetup". Protocol specific error code(s): "414", "", "". SQLSTATE=08001
According to https://www.ibm.com/support/knowledgecenter/en/SSAL2T_7.1.0/com.ibm.cics.tx.doc/reference/r_gskit_error_codes.html, that means:
414 error: GSK_ERROR_BAD_CERT - Incorrectly formatted certificate received from partner.
Note: on another machine with the same config and ibm_db installed, this connection string works (I'm sure I missed something):
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;"
My questions are:
Which env variables or Db2 client parameters do I have to configure to connect with only the Security=ssl property?
How does ibm_db work under the hood when connecting to a remote Db2 server, and where can I find the root certificates from which it automatically generates its own keydb.kdb file, as mentioned in the IBM docs?
Thanks for any ideas ;)
If you're using a self-signed SSL certificate, you can't connect without using options 1 or 2.
In option 1 you're supplying the certificate's public key directly, to allow the Db2 client to validate the Db2 server. This is already using the "in memory keystore" that you're asking about in question #2.
In option 2, you would have imported the same public key into your keystore to allow the Db2 client to validate the server.
If you want to connect using only Security=SSL, your Db2 server's SSL certificate needs to come from one of the CAs already in the system keystore.
I believe that when the Db2-documentation writes "The Client driver will call GSKit API's to create a key database populated with the default root certificates", it means that the dynamically created kdb will contain the certs for some common commercial CAs, and (if specified) will also contain the cert specified by SSLServerCertificate.
As you are using a self-signed certificate, the CA certs will be ignored in this case.
If you are connecting to a Db2-server that runs on Linux/Unix/Windows, using IBM's drivers, and want an encrypted connection that uses the target Db2-instance public-key as part of the encryption, then you must tell the Db2-client the location of that certificate (which contains the Db2-instance public key) in one way or another.
For a Linux client, that cert will either be in a statically created kdb (via GSKit commands) or in a dynamically created kdb populated via the SSLServerCertificate property. For a Db2 client running on Microsoft Windows, the certificate can additionally be fetched from the MS keystore if the Db2 client is configured to use that.
The source code for the ibm_db module is available on GitHub. However, the client-side SSL work happens not in ibm_db but in the (closed source) Db2 driver and the (closed source) GSKit libraries. To see some of what's happening under the covers you can trace the CLI driver; refer to the Db2 documentation online for details of CLI tracing.
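For reference, a minimal trace section in db2cli.ini might look like this (the trace path is a placeholder, and the directory must exist; see the Db2 CLI trace documentation for the full keyword list):

[COMMON]
Trace=1
TracePathName=/tmp/clitrace
TraceComm=1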
I changed my web server from HTTP to HTTPS with "Let's Encrypt".
The web server exposes an API, and I have a Python application which uses that API.
Under Linux everything is fine, but under Windows I receive the error below when logging in:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
My thought was that the SSL certificate isn't installed.
So I downloaded "isrgrootx1.der" and "lets-encrypt-x1-cross-signed.der" and renamed both to the ".cer" extension.
Then I opened the Windows console and ran this:
certutil -addstore "Root" "isrgrootx1.cer"
certutil -addstore "Root" "lets-encrypt-x1-cross-signed.cer"
The second command failed because it isn't a root certificate.
My question is: in which store does "lets-encrypt-x1-cross-signed.cer" have to be installed?
You shouldn't need to add "lets-encrypt-x1-cross-signed.cer" to your Windows machine, since it's only an intermediate certificate. And you shouldn't need to add "isrgrootx1.cer" either, since Let's Encrypt certificates chain to "DST Root X3", which is already included with Windows.
Most likely your web server was not configured to send the intermediate certificate. If you're using Certbot, for instance, you'll want to configure your web server using "fullchain.pem" rather than "cert.pem".
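For example, with nginx and the default Certbot layout (paths assume a certificate issued for example.com; adjust to your domain), the relevant lines would look something like this:

server {
    listen 443 ssl;
    server_name example.com;

    # fullchain.pem bundles the server certificate plus the intermediate(s),
    # which is what lets clients that lack the intermediate verify the chain.
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}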