Python celery connect through SSL

I have been trying to connect to a RabbitMQ instance (it was created from AWS Messaging Service, if it matters) via Celery 5.0.5. The connection URL starts as follows: amqps://user:password@..../. I receive the following error when running my Python script:
consumer: Cannot connect to amqps://sessionstackadmin:**@b-0482d011-0cca-40bd-968e-c19d6c85e2a9.mq.eu-central-1.amazonaws.com:5671//: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
I am running the script from a Docker container with Python 3.6.12. The container has access to the endpoint (at least it can telnet to it). I have the feeling that the Python process does not respect the distro's certificate chain and simply fails to verify the certificate.

I solved it! Celery uses Kombu, which in turn uses py-amqp, and it turns out the latest py-amqp release, 5.0.3 from Jan 19, is broken.
My GH ticket https://github.com/celery/py-amqp/issues/349
Solution: add amqp==5.0.2 as a hard dependency in your project requirements.
Fix at: git+git://github.com/celery/py-amqp.git#0b8a832d32179d33152d886acd6f081f25ea4bf2

I am leaving the workaround that "fixes" this. For some reason, when the Kombu library handles SSL connections it does not respect the default CA certs shipped with your distribution. That is normally handled by https://docs.python.org/3/library/ssl.html#ssl.create_default_context, which the library does not use. Sadly it does not let you pass in a custom SSLContext, only a set of options that are later passed down to the context. One such option is broker_use_ssl. By setting it to {'ca_certs': '/etc/ssl/certs/ca-certificates.crt'} it will respect the CA certs from the distribution (keep in mind that I am using an Ubuntu/Debian based image and that is where the CA bundle lives; if you are using another distro, check the proper location for your CA certs).
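A minimal sketch of that workaround, assuming a Debian/Ubuntu based image (the broker URL and CA path are placeholders, adjust them to your environment):

import ssl
from celery import Celery

app = Celery("tasks", broker="amqps://user:password@broker-host:5671//")

# Tell kombu/py-amqp to verify against the distro CA bundle instead of
# falling back to an empty trust store.
app.conf.broker_use_ssl = {
    "ca_certs": "/etc/ssl/certs/ca-certificates.crt",
    "cert_reqs": ssl.CERT_REQUIRED,
}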

Related

Connecting to Elasticsearch via python

I am running elasticsearch-8.6.1 with default settings on an Azure VM, with port 5601 open. This is a dev server with only one cluster. I am able to start Elasticsearch, Kibana and Logstash services and view them via a browser.
I have some Python code which is trying to connect to Elasticsearch using the recommended approach of verifying HTTPS with the CA certificate, as per https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/connecting.html
I have copied the http_ca.crt file from the VM onto my local machine and made it accessible.
es = Elasticsearch('https://localhost:9200',
                   ca_certs=CA_CERT,
                   basic_auth=(USER_ID, ELASTIC_PASSWORD))
Elasticsearch.yml has the following enabled
network.host: 0.0.0.0
http.host: 0.0.0.0
xpack.security.enabled: true
I appreciate that I can turn off security, but this isn't a sustainable approach moving forward.
The error I am getting is
elastic_transport.ConnectionError: Connection error caused by: ConnectionError(Connection error caused by: NewConnectionError(<urllib3.connection.HTTPSConnection object at 0x000001890CEF3730>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it))
I suspect there is some configuration setting that I am missing somewhere.
Thanks in advance for any advice or pointers that can be offered.
The error message suggests that the Python code is unable to establish a connection to Elasticsearch on the specified host and port. There could be several reasons for this, including network configuration issues or problems with SSL/TLS certificates.
Here are some things you could try to troubleshoot the issue:
Check that Elasticsearch is running and listening on the correct host and port. You can use the curl command to test the connection:
curl -k https://localhost:9200
If Elasticsearch is running, you should see a JSON response with information about the cluster.
Check that the SSL/TLS certificate is valid and trusted by the Python client. You can use the openssl command to check the certificate:
openssl x509 -in http_ca.crt -text -noout
This will display detailed information about the certificate. Make sure that the Validity dates are correct; since http_ca.crt is Elasticsearch's self-signed CA certificate, the Issuer and Subject fields should also match.
Check that the firewall on the Azure VM is not blocking incoming traffic on port 9200. You can use the ufw command to check the firewall rules:
sudo ufw status
If port 9200 is not listed as "ALLOW", you can add a new rule:
sudo ufw allow 9200/tcp
Check that the Python client is using the correct ca_certs file. Make sure that the CA_CERT variable in your code points to the correct file location (see the sketch after these steps).
Check the Elasticsearch logs for any error messages that might indicate the cause of the connection problem. The logs are usually located in the logs directory of the Elasticsearch installation.
Hopefully, one of these steps will help you resolve the issue. Good luck!
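For reference, here is a minimal sketch of the client-side check from the ca_certs step, assuming the certificate was copied to ./http_ca.crt and the VM is reachable as es-dev.example.com (both placeholders). Keep in mind that 'https://localhost:9200' only reaches the node if the script runs on the VM itself or the port is forwarded:

from elasticsearch import Elasticsearch

CA_CERT = "./http_ca.crt"  # path to the http_ca.crt copied from the VM

es = Elasticsearch(
    "https://es-dev.example.com:9200",  # placeholder: the VM's address
    ca_certs=CA_CERT,
    basic_auth=("elastic", "<password>"),
)
print(es.info())  # returns cluster information if the connection works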

Pymongo unable to read Certificate Authority file

I am trying to set up TLS-encrypted connections to a MongoDB database using PyMongo. I have 2 Python binaries installed at 2 different locations, but both are version 3.6.8. For both of them I have installed PyMongo version 4.1.1. I have completed the process of generating the CA keys and server private keys. I then added the ca.pem to '/etc/pki/ca-trust/source/anchors/' and ran 'sudo update-ca-trust' to add the certificate authority to the operating system certificate store. Then I updated the mongod.conf file and restarted the mongod instance. I am able to connect to the mongo shell using this command:
mongo --tls --host='server-host-name'
But the main issue is that I am able to connect to the database using one Python binary, while the other gives this error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
error=AutoReconnect('SSL handshake failed:....)]
The output of the below command is:
openssl version -d
OPENSSLDIR: "/etc/pki/tls"
One workaround that makes the other Python binary work is to explicitly export the path in an environment variable:
export SSL_CERT_FILE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
But I want the other Python binary to also look for the CAs in the appropriate directory automatically.
All these tests are performed locally and not through remote connections (which would require the certificate paths to be specified explicitly). I want to understand the internal workings of pymongo.MongoClient for TLS connections, specifically how it fetches the CA files from the operating system certificate store.
Also, how do I increase the logging for PyMongo; is there any workaround for this? Can someone help me debug this? I can add additional information if required. Thank you.
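For what it's worth, a hedged sketch of pinning the CA file explicitly on the client, which sidesteps the operating-system store lookup entirely (the host name is the question's placeholder and the bundle path is the one mentioned above):

from pymongo import MongoClient

client = MongoClient(
    "mongodb://server-host-name:27017/",
    tls=True,
    # Point PyMongo at the CA bundle directly instead of relying on the
    # OpenSSL default paths compiled into this particular Python binary.
    tlsCAFile="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",
)
print(client.admin.command("ping"))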

Python Requests SSLError with internal CA

My company operates its own internal CA for internal services, and I need to hook up Ansible AWX (Python) to talk to one of our internal services, which uses a cert signed by this CA. Basically:
AWX spins up a container awx_task with /etc/pki/ca-trust/source/anchors mounted in, which contains the root CA cert. [double-checked]
update-ca-trust is run, bundling the CA cert into various things, including /etc/pki/tls/certs/ca-bundle.crt. [double-checked]
requests should use this bundle. There are no CA-related environment variables that I can find inside the container or on the host that would override this.
However when I trigger a test run of an Ansible play which runs inside of awx_task I get the error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='vault.example.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)'),))
On the host machine I can run
import requests
requests.get("https://vault.example.com")
and get a 200 response, and if I strace the process I can see it reading /etc/pki/tls/certs/ca-bundle.crt. But from inside awx_task I get the same requests.exceptions.SSLError as above. Unfortunately Docker won't let me run strace inside the container so I can't see what it's trying to read.
But if I modify the code to:
import requests
requests.get("https://vault.example.com", verify="/etc/pki/tls/certs/ca-bundle.crt")
I get a 200 response from inside the container.
What am I missing here?
The problem is what @Will noted: the current version of Requests uses the certifi bundle, which is entirely separate from OpenSSL. The bundle PEM actually lives somewhere in your Python site-packages dir.
Without modifying your code you can override this with the environment variable:
REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
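To sanity-check the override, a small sketch (the host is the question's placeholder; normally you would export the variable in the container environment rather than set it in code):

import os
import requests

# requests reads REQUESTS_CA_BUNDLE at request time when verify=True (the
# default), so setting it in the environment has the same effect as verify=.
os.environ.setdefault("REQUESTS_CA_BUNDLE", "/etc/pki/tls/certs/ca-bundle.crt")
print(requests.get("https://vault.example.com").status_code)  # expect 200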
Editorial: This is an absolutely ridiculous way to enforce CA trust. If you want to pare down your system trust, pare it down at the system level. I'm getting real sick of chasing down random PEM bundles scattered through source trees [which probably never get updated] just because some @devoops nutbag thinks he knows how to run systems better than actual ops, forking their bad ideas off to unsuspecting systems.
(ノಠ益ಠ)ノ彡┻━┻

Where does python library aiohttp/asyncio get its certificate store? [Ubuntu docker container]

I am running a docker container with Ubuntu as the base and am trying to add a new Certificate Authority to the project.
I'm not entirely sure what's failing, but I cannot seem to make it work. I followed the directions on this page: http://manpages.ubuntu.com/manpages/zesty/man8/update-ca-certificates.8.html by adding the CA file to a directory in /usr/share/ca-certificates, specifying the CA files in /etc/ca-certificates.conf, and then running update-ca-certificates, which completes with a message saying that it added 3 new certificates.
However, aiohttp is still printing the error
aiohttp.errors.ClientOSError: [Errno 1] Cannot connect to host www.myserver.com:443 ssl:True [Can not connect to www.myserver.com:443 [[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:719)]]
I was informed that aiohttp doesn't access a certificate store itself, but rather relies on asyncio which I think was absorbed into python itself recently. So I don't know if somewhere along the chain something is using a different certificate store, but I would just like to know where I can add my CA files so that they will work with aiohttp.
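For reference, the ssl module normally loads the OpenSSL default paths, which on Ubuntu point at the /etc/ssl/certs bundle that update-ca-certificates maintains. If that lookup is not happening in your container, a workaround sketch is to build the SSLContext yourself and hand it to aiohttp (the host is the question's placeholder; older aiohttp releases take ssl_context= on the connector instead of ssl=):

import asyncio
import ssl
import aiohttp

async def fetch(url):
    # Explicitly load the distro bundle instead of relying on the default lookup.
    ssl_ctx = ssl.create_default_context(cafile="/etc/ssl/certs/ca-certificates.crt")
    conn = aiohttp.TCPConnector(ssl=ssl_ctx)
    async with aiohttp.ClientSession(connector=conn) as session:
        async with session.get(url) as resp:
            return resp.status

print(asyncio.get_event_loop().run_until_complete(fetch("https://www.myserver.com")))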

Let's encrypt certificate, Python and Windows

I changed my web server from HTTP to HTTPS with Let's Encrypt.
The web server hosts an API, and I have a Python application which uses the API.
Under Linux all is fine, but under Windows I receive the error below when I'm logging in.
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
My thought was that the SSL certificate isn't installed.
So I downloaded "isrgrootx1.der" and "lets-encrypt-x1-cross-signed.der" and renamed both to the extension "*.cer".
Then I opened the Windows console and ran this:
certutil -addstore "Root" "isrgrootx1.cer"
certutil -addstore "Root" "lets-encrypt-x1-cross-signed.cer"
The second command failed, because it isn't a root certificate.
My question is: In which group has the "lets-encrypt-x1-cross-signed.cer" to be installed?
You shouldn't need to add "lets-encrypt-x1-cross-signed.cer" to your Windows machine, since it's only an intermediate certificate. And you shouldn't need to add "isrgrootx1.cer" either, since Let's Encrypt certificates chain to "DST Root X3", which is already included with Windows.
Most likely your web server was not configured to send the intermediate certificate. If you're using Certbot, for instance, you'll want to configure your web server using "fullchain.pem" rather than "cert.pem".
