Connecting to Elasticsearch via Python

I am running elasticsearch-8.6.1 with default settings on an Azure VM, with port 5601 open. This is a dev server with only one cluster. I am able to start Elasticsearch, Kibana and Logstash services and view them via a browser.
I have some Python code that tries to connect to Elasticsearch over HTTPS, verifying the connection with the CA certificate as recommended in https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/connecting.html
I have copied the http_ca.crt file from the VM onto my local machine and made it accessible.
from elasticsearch import Elasticsearch

es = Elasticsearch('https://localhost:9200',
                   ca_certs=CA_CERT,
                   basic_auth=(USER_ID, ELASTIC_PASSWORD))
elasticsearch.yml has the following enabled:
network.host: 0.0.0.0
http.host: 0.0.0.0
xpack.security.enabled: true
I appreciate that I can turn off security, but this isn't a sustainable approach moving forward.
The error I am getting is
elastic_transport.ConnectionError: Connection error caused by:
ConnectionError(Connection error caused by:
NewConnectionError(<urllib3.connection.HTTPSConnection object at 0x000001890CEF3730>:
Failed to establish a new connection: [WinError 10061] No connection could be made
because the target machine actively refused it))
I suspect there is some configuration setting that I am missing somewhere.
Thanks in advance for any advice or pointers that can be offered.

The error message suggests that the Python code is unable to establish a connection to Elasticsearch on the specified host and port. There could be several reasons for this, including network configuration issues or problems with SSL/TLS certificates.
Here are some things you could try to troubleshoot the issue:
Check that Elasticsearch is running and listening on the correct host and port. You can use curl to test the connection (run it both on the VM itself and from your local machine, substituting the VM's address for localhost):
curl -k https://localhost:9200
If Elasticsearch is running, you should see a JSON response with information about the cluster.
Check that the SSL/TLS certificate is valid and trusted by the Python client. You can use the openssl command to check the certificate:
openssl x509 -in http_ca.crt -text -noout
This will display detailed information about the certificate. Make sure that the Issuer and Subject fields match and that the Validity dates are correct.
Check that the firewall on the Azure VM is not blocking incoming traffic on port 9200. On Azure, the port typically has to be allowed both in the OS firewall and in the VM's Network Security Group. You can use the ufw command to check the OS firewall rules:
sudo ufw status
If port 9200 is not listed as "ALLOW", you can add a new rule:
sudo ufw allow 9200/tcp
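If the VM's Network Security Group is also blocking the port, one way to open it is with the Azure CLI (the resource group and VM name below are placeholders):
az vm open-port --resource-group my-resource-group --name my-vm --port 9200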
Check that the Python client is using the correct ca_certs file. Make sure that the CA_CERT variable in your code points to the correct file location.
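As a sanity check, here is a minimal sketch of the client configuration; the hostname is a placeholder, and note that when the script runs on your local machine rather than on the VM, https://localhost:9200 will not reach Elasticsearch on the Azure VM:
from elasticsearch import Elasticsearch

# Placeholder address: substitute the Azure VM's public DNS name or IP.
es = Elasticsearch(
    'https://my-azure-vm.example.com:9200',
    ca_certs='/path/to/http_ca.crt',
    basic_auth=('elastic', 'your-password'),
)
print(es.info())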
Check the Elasticsearch logs for any error messages that might indicate the cause of the connection problem. The logs are usually located in the logs directory of the Elasticsearch installation.
Hopefully, one of these steps will help you resolve the issue. Good luck!

Related

Pymongo unable to read Certificate Authority file

I am trying to set up TLS-encrypted connections to a MongoDB database using PyMongo. I have two Python binaries installed at two different locations, both version 3.6.8, and for both of them I have installed PyMongo version 4.1.1. I have completed the process of generating CA keys and server private keys. I then added the ca.pem to '/etc/pki/ca-trust/source/anchors/' and ran 'sudo update-ca-trust' to add the certificate authority to the operating system certificate store. Then I updated the mongod.conf file and restarted the mongod instance. I am able to connect to the mongo shell using this command:
mongo --tls --host='server-host-name'
The main issue is that I am able to connect to the database using one Python installation, but the other gives this error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
error=AutoReconnect('SSL handshake failed:....)]
The output of the below command is:
openssl version -d
OPENSSLDIR: "/etc/pki/tls"
One workaround to make the other Python binary work was to explicitly export the path in an environment variable:
export SSL_CERT_FILE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
But, I want the other python binary to also look for the CAs in the appropriate directory automatically.
All these tests are performed locally, not over remote connections (which would require the certificate paths to be specified explicitly). I would like to understand the internal workings of pymongo.MongoClient for TLS connections in detail: basically, how it fetches the CA files from the operating system certificate store.
Also, how do I increase the logging for pymongo? Is there any workaround for this? Can someone help me debug it? I can add additional information if required. Thank you.
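For what it's worth, a minimal sketch of forcing PyMongo to use an explicit CA file, which bypasses the OpenSSL default-path lookup entirely (the host and bundle path are placeholders):
from pymongo import MongoClient

# Passing tlsCAFile explicitly avoids relying on the OpenSSL default
# certificate directory, which can differ between Python builds.
client = MongoClient(
    'server-host-name',
    tls=True,
    tlsCAFile='/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem',
)
print(client.admin.command('ping'))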

Python celery connect through ssl

I have been trying to connect to a RabbitMQ instance (created through the AWS messaging service, if it matters) via Celery 5.0.5. The connection link starts as follows: amqps://user:password#..../. I receive the following error when running my Python script:
consumer: Cannot connect to amqps://sessionstackadmin:**#b-0482d011-0cca-40bd-968e-c19d6c85e2a9.mq.eu-central-1.amazonaws.com:5671//: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
I am running the script from a Docker container with Python 3.6.12. The Docker container has access to the endpoint (at least it can telnet to it). I have the feeling that the Python process does not respect the distro certificate chain and just fails to verify the certificate.
I solved it! Celery uses Kombu, which uses py-amqp, and it happens that the latest version, 5.0.3 from Jan 19, is broken.
My GH ticket https://github.com/celery/py-amqp/issues/349
Solution: add amqp==5.0.2 as a hard dependency in your project requirements.
Fix at: git+git://github.com/celery/py-amqp.git#0b8a832d32179d33152d886acd6f081f25ea4bf2
I am leaving the workaround that "fixes" this. For some reason, when handling SSL connections, the kombu library does not respect the default CA certs shipped with your distribution. This is normally handled by https://docs.python.org/3/library/ssl.html#ssl.create_default_context, which the library does not use. Sadly, it does not allow passing in a custom SSLContext, only a set of options that are later passed down to the context. One such option is broker_use_ssl. By setting it to {'ca_certs': '/etc/ssl/certs/ca-certificates.crt'} it will respect the CA certs from the distribution. (Keep in mind that I am using an Ubuntu/Debian-based image, where the CA bundle lives at that path; if you are using another distro, check the proper location for your CA certs.)
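A minimal sketch of that workaround in a Celery app (the broker URL and CA-bundle path are placeholders; the path shown is the Debian/Ubuntu default):
import ssl

from celery import Celery

app = Celery('tasks', broker='amqps://user:password@broker-host:5671//')

# Point kombu/py-amqp at the distro CA bundle explicitly, since it does
# not fall back to ssl.create_default_context() on its own.
app.conf.broker_use_ssl = {
    'ca_certs': '/etc/ssl/certs/ca-certificates.crt',
    'cert_reqs': ssl.CERT_REQUIRED,
}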

Automate Global Protect VPN connection in Python

I was asked to use Python to automate processes that download files from multiple servers. In order to connect to the servers, I must connect to the Global Protect VPN first. That said, in order to automate the process, I must also automate the VPN connection/disconnection. I tried to search for information about how to automate the GP VPN connection in Python but couldn't find any helpful posts. Could anyone please help with it? Thank you!
You can use GlobalProtect from the CLI, so it's easy to call the commands you need from Python.
On my ubuntu system, if I want to launch the GUI I can type in my terminal:
globalprotect launch-ui
If I want to connect to a VPN server from CLI (without launching the UI) I can use:
globalprotect connect --portal <gp-portal>
You can find more information here: Palo Alto GlobalProtect.
To use the above CLI from Python, see: Call shell/CLI from python.
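For instance, a minimal sketch using subprocess (the portal address is a placeholder):
import subprocess

def gp_connect(portal: str) -> None:
    # Shell out to the GlobalProtect CLI and raise if the command fails.
    subprocess.run(['globalprotect', 'connect', '--portal', portal], check=True)

def gp_disconnect() -> None:
    subprocess.run(['globalprotect', 'disconnect'], check=True)

gp_connect('vpn.example.com')
# ... download the files from the servers ...
gp_disconnect()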
Also, keep in mind:
When you use certificate-based authentication, the first time you connect without a root CA certificate, the GlobalProtect app and GlobalProtect portal exchange certificates. The GlobalProtect app displays a certificate error, which you must acknowledge before you authenticate. When you next connect, you will not be prompted with the certificate error message.
If that is the case for you, you can specify the location of the certificate:
globalprotect import-certificate --location /home/mydir/Downloads/cert_client_cert.p12
Refer to the Palo Alto GlobalProtect documentation linked above for more CLI commands.

Python IBM_DB using SSL connection

I'm using Python on CentOS 7 and I have installed GSK8Kit with the Db2 11.3 client.
So I set:
IBM_DB_HOME=/path/to/my/db2client/sqllib (the ODBC and CLI driver directory)
Also I set:
LD_LIBRARY_PATH=$IBM_DB_HOME/lib:$LD_LIBRARY_PATH
Then I installed ibm_db:
pip install ibm_db
I added my db2servercert.arm into mykeydb.kdb file, located /opt/IBM/db2/GSK8KitStore and I'm using the same version of GSK8Kit on client and server.
gsk8capicmd_64 -cert -add -db mykeydb.kdb -stashed -label "DB2 Server self-signed certificate" -file db2servercert.arm -format ascii -trust enable
According to the IBM docs: https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html
From Db2 V10.5 FP5 onwards, the SSLClientKeystoredb and SSLClientKeystash keywords are not needed in the connection string, db2cli.ini file, FileDSN, or db2dsdriver.cfg file. If you have not set or passed values for the SSLClientKeystoredb and SSLClientKeystash keywords, the CLI/ODBC client driver will create a default key database internally during the first SSL connection. The Client driver will call GSKit API's to create a key database populated with the default root certificates.
Now I'm trying to create the ibm_db connection string for a Db2 SSL connection using various scenarios:
1) Security=ssl and SSLServerCertificate=/path/to/my/db2servercert.arm
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;SSLServerCertificate=/path/to/my/db2servercert.arm;"
2) Security=ssl with SSLClientKeystoredb=/opt/IBM/db2/GSK8KitStore/mykeydb.kdb and SSLClientKeystash=/opt/IBM/db2/GSK8KitStore/mystashfile.sth
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;SSLClientKeystoredb=/opt/IBM/db2/GSK8KitStore/mykeydb.kdb;SSLClientKeystash=/opt/IBM/db2/GSK8KitStore/mystashfile.sth;"
3) Security=ssl only
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;"
In 1) and 2) I was able to connect without any SSL errors, but in 3) I'm getting a socket 414 error:
[IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "SSL".
Communication API being used: "SOCKETS". Location where the error was detected: "".
Communication function detecting the error: "sqlccSSLSocketSetup". Protocol specific error code(s): "414", "", "". SQLSTATE=08001
According to https://www.ibm.com/support/knowledgecenter/en/SSAL2T_7.1.0/com.ibm.cics.tx.doc/reference/r_gskit_error_codes.html, that means:
414 error: GSK_ERROR_BAD_CERT - Incorrectly formatted certificate received from partner.
Note: on another machine with the same config and ibm_db installed, this connection string works (I'm sure I missed something):
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;"
My questions are:
Which environment variables or Db2 client parameters do I have to configure to connect with only the Security=ssl property?
How does ibm_db work under the hood when connecting to a remote Db2 server, and where can I find the root certificates from which it automatically generates its own keydb.kdb file, as mentioned in the IBM docs?
Thanks for any ideas ;)
If you're using a self-signed SSL certificate, you can't connect without using options 1 or 2.
In option 1 you're supplying the certificate's public key directly, to allow the Db2 client to validate the Db2 server. This is already using the "in memory keystore" that you're asking about in question #2.
In option 2, you would have imported the same public key into your keystore to allow the Db2 client to validate the server.
If you want to connect using only Security=SSL, your Db2 server's SSL certificate needs to come from one of the CAs already in the system keystore.
I believe that when the Db2-documentation writes "The Client driver will call GSKit API's to create a key database populated with the default root certificates", it means that the dynamically created kdb will contain the certs for some common commercial CAs, and (if specified) will also contain the cert specified by SSLServerCertificate.
As you are using a self-signed certificate, the CA certs will be ignored in this case.
If you are connecting to a Db2-server that runs on Linux/Unix/Windows, using IBM's drivers, and want an encrypted connection that uses the target Db2-instance public-key as part of the encryption, then you must tell the Db2-client the location of that certificate (which contains the Db2-instance public key) in one way or another.
For a Linux client, that cert will either be in a statically created kdb (via GSKit commands) or in a dynamically created kdb populated via the SSLServerCertificate property. For a Db2 client running on Microsoft Windows, the certificate can additionally be fetched from the MS keystore if the Db2 client is configured to use it.
The source code for ibm_db module is available on github. However, the client-side SSL work happens not in ibm_db module but instead happens in the (closed source) Db2-driver along with (closed source) libraries for GSKit. To see some of what's happening under the covers you can trace the CLI driver. Refer to the Db2-documentation online for details of CLI tracing.
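For reference, a minimal sketch of option 1 in Python with ibm_db (host, port, and credentials are placeholders):
import ibm_db

# Option 1: hand the driver the server certificate's ARM file directly;
# the driver builds its in-memory keystore from it.
conn_str = (
    "DATABASE=sampledb;"
    "HOSTNAME=myhost;"
    "PORT=50001;"
    "PROTOCOL=TCPIP;"
    "UID=db2user;"
    "PWD=secret;"
    "SECURITY=SSL;"
    "SSLServerCertificate=/path/to/my/db2servercert.arm;"
)
conn = ibm_db.connect(conn_str, "", "")
print(ibm_db.server_info(conn).DBMS_VER)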

Connect GAE Remote API to dev_appserver.py

I want to execute a Python script that connects to my local dev_appserver.py instance to run some DataStore queries.
The dev_appserver.py is running with:
builtins:
- remote_api: on
As per https://cloud.google.com/appengine/docs/python/tools/remoteapi I have:
remote_api_stub.ConfigureRemoteApiForOAuth(
    hostname,
    '/_ah/remote_api'
)
in my Python script. But what should the hostname be set to?
For example, when dev_appserver.py started, it prints:
INFO 2016-10-18 12:02:16,850 api_server.py:205] Starting API server at: http://localhost:56700
But when I set the value to localhost:56700, I get the following error:
httplib2.SSLHandshakeError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
(Same error for any port that has anything running on it - e.g. 8000, 8080, etc).
If anyone has managed to get this to run successfully, what hostname did you use?
Many thanks,
Ned
The dev_appserver.py doesn't support SSL (I can't find the doc reference anymore), so it can't answer https:// requests.
You could try using http-only URLs (not sure if that's possible with the remote API; I haven't used it yet, and you may need to disable the handler's secure option in your app.yaml config files).
At least on my devserver I am able to direct my browser to the http-only API server URL reported by devserver.py at startup and I see {app_id: dev~my_app_name, rtok: '0'}.
Or you could setup a proxy server, see GAE dev_appserver.py over HTTPS.
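If your SDK version supports it, one way to force plain HTTP against the dev server is the stub's secure flag. A sketch, assuming the API server port printed by dev_appserver.py at startup:
from google.appengine.ext.remote_api import remote_api_stub

# Use the API server address printed at startup, and disable SSL,
# since the dev server only speaks plain HTTP.
remote_api_stub.ConfigureRemoteApiForOAuth(
    'localhost:56700',
    '/_ah/remote_api',
    secure=False,
)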
