Python IBM_DB using SSL connection

I'm using Python on CentOS 7 and I have installed GSKit 8 with the Db2 11.3 client.
So I set:
IBM_DB_HOME=/path/to/my/db2client/sqllib (the ODBC and CLI driver)
Also I set:
LD_LIBRARY_PATH=$IBM_DB_HOME/lib:$LD_LIBRARY_PATH
Then I installed ibm_db:
pip install ibm_db
I added my db2servercert.arm to the mykeydb.kdb file located in /opt/IBM/db2/GSK8KitStore, and I'm using the same version of GSKit on the client and the server:
gsk8capicmd_64 -cert -add -db mykeydb.kdb -stashed -label "DB2 Server self-signed certificate" -file db2servercert.arm -format ascii -trust enable
According to the IBM docs (https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html):
From Db2 V10.5 FP5 onwards, the SSLClientKeystoredb and SSLClientKeystash keywords are not needed in the connection string, db2cli.ini file, FileDSN, or db2dsdriver.cfg file. If you have not set or passed values for the SSLClientKeystoredb and SSLClientKeystash keywords, the CLI/ODBC client driver will create a default key database internally during the first SSL connection. The client driver will call GSKit APIs to create a key database populated with the default root certificates.
Now I'm trying to build an ibm_db connection string for a Db2 SSL connection using various scenarios:
1) Security=ssl and SSLServerCertificate=/path/to/my/db2servercert.arm:
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;SSLServerCertificate=/path/to/my/db2servercert.arm;"
2) Security=ssl with SSLClientKeystoredb=/opt/IBM/db2/GSK8KitStore/mykeydb.kdb and SSLClientKeystash=/opt/IBM/db2/GSK8KitStore/mystashfile.sth:
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;SSLClientKeystoredb=/opt/IBM/db2/GSK8KitStore/mykeydb.kdb;SSLClientKeystash=/opt/IBM/db2/GSK8KitStore/mystashfile.sth;"
3) Security=ssl only:
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;"
In 1) and 2) I was able to connect without any SSL connection errors, but in 3) I'm getting a socket 414 error:
[IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "SSL".
Communication API being used: "SOCKETS". Location where the error was detected: "".
Communication function detecting the error: "sqlccSSLSocketSetup". Protocol specific error code(s): "414", "", "". SQLSTATE=08001
According to the GSKit error codes (https://www.ibm.com/support/knowledgecenter/en/SSAL2T_7.1.0/com.ibm.cics.tx.doc/reference/r_gskit_error_codes.html), that means:
414: GSK_ERROR_BAD_CERT - Incorrectly formatted certificate received from partner.
Note: on another machine with the same config and ibm_db installed, this connection string works (I'm sure I missed something):
"Database=sampledb;Protocol=tcpip;Hostname=myhost;Servicename=50001;Security=ssl;"
My questions are:
Which environment variables or Db2 client parameters do I have to configure to connect with only the Security=ssl property?
How does ibm_db work under the hood when connecting to a remote Db2 server, and where can I find the root certificates from which it automatically generates its own keydb.kdb file, as mentioned in the IBM docs?
Thanks for any ideas!

If you're using a self-signed SSL certificate, you can't connect without using options 1 or 2.
In option 1 you're supplying the certificate's public key directly, to allow the Db2 client to validate the Db2 server. This is already using the "in memory keystore" that you're asking about in question #2.
In option 2, you would have imported the same public key into your keystore to allow the Db2 client to validate the server.
If you want to connect using only Security=SSL, your Db2 server's SSL certificate needs to come from one of the CAs already in the system keystore.
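If you want to see which certificate the server actually presents (and whether it chains to a known CA), you can probe it with openssl, using the host and port from the question:
openssl s_client -connect myhost:50001 -showcerts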

I believe that when the Db2 documentation writes "The Client driver will call GSKit API's to create a key database populated with the default root certificates", it means that the dynamically created kdb will contain the certs of some common commercial CAs, and (if specified) will also contain the cert given by SSLServerCertificate.
As you are using a self-signed certificate, the CA certs are ignored in this case.
If you are connecting to a Db2 server that runs on Linux/Unix/Windows using IBM's drivers, and you want an encrypted connection that uses the target Db2 instance's public key as part of the encryption, then you must tell the Db2 client the location of that certificate (which contains the Db2 instance's public key) in one way or another.
For a Linux client, that cert will either be in a statically created kdb (via GSKit commands) or in a dynamically created kdb specified via the SSLServerCertificate property. For a Db2 client running on Microsoft Windows, the certificate can additionally be fetched from the MS keystore if the Db2 client is configured to use that.
The source code for the ibm_db module is available on GitHub. However, the client-side SSL work happens not in the ibm_db module but in the (closed source) Db2 driver, along with the (closed source) GSKit libraries. To see some of what's happening under the covers you can trace the CLI driver; refer to the Db2 documentation online for details of CLI tracing.
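For example, CLI tracing can be enabled through the [COMMON] section of db2cli.ini; these are the documented trace keywords, though the path here is just an assumption for illustration:
[COMMON]
Trace=1
TracePathName=/tmp/clitrace
TraceComm=1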

Related

Connecting to Elasticsearch via python

I am running elasticsearch-8.6.1 with default settings on an Azure VM, with port 5601 open. This is a dev server with only one cluster. I am able to start Elasticsearch, Kibana and Logstash services and view them via a browser.
I have some Python code which tries to connect to Elasticsearch using the recommended route of verifying HTTPS via a CA certificate, as per https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/connecting.html
I have copied the http_ca.crt file from the VM onto my local machine and made it accessible.
from elasticsearch import Elasticsearch

es = Elasticsearch('https://localhost:9200',
                   ca_certs=CA_CERT,
                   basic_auth=(USER_ID, ELASTIC_PASSWORD))
elasticsearch.yml has the following enabled:
network.host: 0.0.0.0
http.host: 0.0.0.0
xpack.security.enabled: true
I appreciate that I can turn off security, but this isn't a sustainable approach moving forward.
The error I am getting is
elastic_transport.ConnectionError: Connection error caused by:
ConnectionError(Connection error caused by:
NewConnectionError(<urllib3.connection.HTTPSConnection object at
0x000001890CEF3730>: Failed to establish a new connection: [WinError
10061] No connection could be made because the target machine actively
refused it))
I suspect there is some configuration setting that I am missing somewhere.
Thanks in advance for any advice or pointers that can be offered.
The error message suggests that the Python code is unable to establish a connection to Elasticsearch on the specified host and port. There could be several reasons for this, including network configuration issues or problems with SSL/TLS certificates.
Here are some things you could try to troubleshoot the issue:
Check that Elasticsearch is running and listening on the correct host and port. You can use the curl command to test the connection:
curl -k https://localhost:9200
If Elasticsearch is running, you should see a JSON response with information about the cluster.
Check that the SSL/TLS certificate is valid and trusted by the Python client. You can use the openssl command to check the certificate:
openssl x509 -in http_ca.crt -text -noout
This will display detailed information about the certificate. Make sure that the Issuer and Subject fields match and that the Validity dates are correct.
Check that the firewall on the Azure VM is not blocking incoming traffic on port 9200. You can use the ufw command to check the firewall rules:
sudo ufw status
If port 9200 is not listed as "ALLOW", you can add a new rule:
sudo ufw allow 9200/tcp
Check that the Python client is using the correct ca_certs file. Make sure that the CA_CERT variable in your code points to the correct file location.
Check the Elasticsearch logs for any error messages that might indicate the cause of the connection problem. The logs are usually located in the logs directory of the Elasticsearch installation.
Hopefully, one of these steps will help you resolve the issue. Good luck!
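One more thing worth double-checking: the code in the question targets https://localhost:9200, but the cluster runs on an Azure VM. A sketch of pointing the client at the VM instead (the hostname and credentials are placeholders; the CA file is the one copied from the VM):

from elasticsearch import Elasticsearch

CA_CERT = "http_ca.crt"            # path to the CA file copied from the VM
USER_ID = "elastic"                # placeholder user
ELASTIC_PASSWORD = "changeme"      # placeholder password

# Target the VM's address rather than localhost.
es = Elasticsearch("https://my-azure-vm:9200",
                   ca_certs=CA_CERT,
                   basic_auth=(USER_ID, ELASTIC_PASSWORD))
print(es.info())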

Pymongo unable to read Certificate Authority file

I am trying to set up TLS-encrypted connections to a MongoDB database using PyMongo. I have two Python installations at two different locations, but both are version 3.6.8, and for both of them I have installed PyMongo version 4.1.1. I have completed the process of generating the CA keys and server private keys. I then added the ca.pem to /etc/pki/ca-trust/source/anchors/ and ran 'sudo update-ca-trust' to add the certificate authority to the operating system certificate store. Then I updated the mongod.conf file and restarted the mongod instance. I am able to connect to the mongo shell using this command:
mongo --tls --host='server-host-name'
But the main issue is that I am able to connect to the database using one Python installation, while the other gives this error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
error=AutoReconnect('SSL handshake failed:....)]
The output of the below command is:
openssl version -d
OPENSSLDIR: "/etc/pki/tls"
One workaround to make the other Python binary work was to explicitly export the path in an environment variable:
export SSL_CERT_FILE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
But I want the other Python binary to also look for the CAs in the appropriate directory automatically.
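To see where each Python binary's OpenSSL looks for CAs by default, this quick check (standard library only) can be run under both installations and the outputs compared:

import ssl

# Shows the default CA file/dir this interpreter's OpenSSL build will use.
print(ssl.get_default_verify_paths())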
All these tests are performed locally and not through remote connections (which would require the certificate paths to be specified explicitly). I want to understand the internal workings of pymongo.MongoClient for TLS connections, specifically how it fetches the CA files from the operating system certificate store.
Also, how do I increase the logging for PyMongo? Is there any workaround for that? Can someone help me debug this? I can add additional information if required. Thank you.
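As a debugging comparison, the OS store lookup can be taken out of the picture entirely by passing the CA bundle to MongoClient explicitly; a sketch using the host and bundle path from above:

from pymongo import MongoClient

# Explicit CA file instead of relying on the OS certificate store.
client = MongoClient("server-host-name",
                     tls=True,
                     tlsCAFile="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem")
print(client.admin.command("ping"))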

PYODBC + MS SQL SERVER connection with Encrypt=yes not connecting

We have a Python Flask app running on an AWS CentOS ECS instance. We are trying to establish an encrypted connection to our database via pyodbc with ODBC Driver 17 on Linux. When running locally we just use the SQL Server driver. Currently we have the code:
params = urllib.parse.quote_plus("DRIVER=...;SERVER=...;UID=...;PWD=...;Encrypt=yes")
SQLALCHEMY_DATABASE_URI = "mssql+pyodbc:///?odbc_connect=%s" % params
We have TLS enabled on the server. The connection works locally on Windows but not when deployed on Linux.
We are currently deploying with 'yes' instead of 'true'. We are also about to try 'trustedserverconnection=yes'. Any insight into this process would be greatly appreciated!
Update: the latest error is: invalid connection string attribute 'trustservercertificate'.
We ended up implementing a second connection param:
TrustServerCertificate=YES
This is not ideal, obviously, because we want to follow good security practices. In the future we will need to set this to false and put our SSL PEM file in the Linux SSL store.
Hope this helps someone. We had some trouble finding documentation for pyodbc with MS SQL Server.
According to the pyodbc documentation, pyodbc passes the connection string through to the underlying ODBC driver. Microsoft's article "Using Connection String Keywords with SQL Server Native Client" documents both the Encrypt and TrustServerCertificate attributes. The TrustServerCertificate setting should generally be avoided for production databases; however, it is very useful when testing encrypted connections to a development database that uses a self-signed certificate. For example, the default installation of SQL Server uses a self-signed certificate and will require this setting.
In my mssql+pyodbc connection strings I just append ?Encrypt=yes&TrustServerCertificate=yes as appropriate. Note that if you already have another setting after a question mark (?), use & instead, for example: ?Trusted_Connection=yes&Encrypt=yes&TrustServerCertificate=yes
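Putting that together, a minimal sketch (server, database, credentials and driver name are placeholders for your own values):

from sqlalchemy import create_engine

# Extra keywords are appended after the driver parameter with '&'.
engine = create_engine(
    "mssql+pyodbc://user:pwd@myserver/mydb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
    "&Encrypt=yes&TrustServerCertificate=yes"
)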

sqlalchemy with db2 and kerberos

How can I connect to my Db2 database with SQLAlchemy when authentication is handled by Kerberos?
When using pyodbc, the connection string contains AuthenticationMethod=4, which lets Kerberos handle the authentication so I don't need to provide a username and password.
Is there a way to either pass a pyodbc.connect object directly into SQLAlchemy, or can I alternatively tell SQLAlchemy to use Kerberos?
My odbc connection string looks like this:
connstr = 'ApplicationUsingThreads=0;' \
          'FloatingPointParameters=0;' \
          'DoubleToStringPrecision=16;DB=NYRMPDI1;' \
          'AuthenticationMethod=4;' \
          f'IpAddress={ip_address};' \
          f'TcpPort={port};' \
          f'DRIVER={driver_location}'
I can't find any way to pass this into sqlalchemy create_engine.
ibm_db_sa with an IBM Db2 driver supports Kerberos connections with pyodbc, using both DSN-less and DSN connection strings, and it works with all three types of IBM Db2 driver (the fat client, the runtime client, and the ODBC and CLI driver). Different configuration is needed for the fat client and runtime client than for the ODBC and CLI driver.
By default, unless you tell it otherwise, installing the ibm_db_sa or ibm_db modules will install the IBM ODBC and CLI driver.
Your odbcinst.ini needs to define a driver name (in my example I call it DB2CLI, but you can give it any name you prefer) and specify the library to load (for example libdb2.so) from the correct path.
Here is an example of a DSN-less connection string, which you must URL-encode before passing to create_engine():
import urllib.parse
from sqlalchemy import create_engine

CONNECTION_STRING = ("DRIVER={DB2CLI};HOSTNAME=192.168.1.178;PORT=60000;KRBPLUGIN=IBMkrb5;AUTHENTICATION=KERBEROS;DATABASE=SAMPLE;")
quoted_connection_string = urllib.parse.quote_plus(CONNECTION_STRING)
engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(quoted_connection_string))
If you prefer a DSN connection, you must define all the details in db2dsdriver.cfg, have a stanza for the DSN in the active odbc.ini that references the driver configured in your odbcinst.ini, and specify only the DSN in the connection string, like this:
CONNECTION_STRING=("DSN=SAMPLE;")
engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(CONNECTION_STRING))
For DSN connections, it helps to first get the Kerberos connection working with isql before trying it with SQLAlchemy, because the troubleshooting seems easier.
I tested with these component versions:
ubuntu 16.04 LTS x64
python 3.6.8 in a virtualenv
ibm_db 3.0.1
ibm_db_sa 0.3.5
unixODBC 2.3.4
pyodbc 4.0.30
IBM Db2 data server driver 11.1.4.4a (optional)
IBM Db2 ODBC and CLI driver (default)
local and remote Db2-LUW servers whose Db2-instances are kerberized already.
Steps to try:
For DSN connections, configure your active db2dsdriver.cfg with the dsn and database entries, setting the Authentication parameter to Kerberos.
For the fat client and runtime client, set your IBM Data Server Client CLNT_KRB_PLUGIN parameter to IBMkrb5 via db2 update dbm cfg using CLNT_KRB_PLUGIN IBMkrb5. (You don't need this step when using the ODBC and CLI driver.)
Configure your active odbcinst.ini for Db2 to use the correct libdb2.so library as supplied by your Db2 client, and reference this driver name either in your DSN-less Python code or in your odbc.ini for DSN connections.
For DSN connections only, configure your active odbc.ini to use the Db2 driver specified in odbcinst.ini, and mention Authentication = kerberos in your DSN stanza in odbc.ini.
For DSN connections, omit any userid/password from the active odbc.ini file. For DSN-less connections you don't need any reference to the database in odbc.ini or db2dsdriver.cfg.
For DSN connections only, verify that db2cli validate -dsn $YOURDSN -connect completes successfully against the remote database without a userid or password. This proves that the CLI layer is using Kerberos.
(Optional) For the Db2 fat client or runtime client, verify you can connect to a catalogued remote database at the shell command line with db2 connect to $YOUR_REMOTE_DATABASE (without needing to enter a userid/password). This proves that regular shell scripts can connect to the database with Kerberos authentication.
If you are using either the Db2 fat client or the Db2 runtime client, you need to dot in / source the correct db2profile before running either isql or your Python script.
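For example (the instance owner's path here is an assumption; substitute your own sqllib location):
. /home/db2inst1/sqllib/db2profile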

ORA-21561: OID generation failed for remote Oracle 12c XE instance - Oracle on Windows 10, client (cx_Oracle using Python) on Mac

I am trying to connect from a Mac to an Oracle instance that is running on Windows 10, using Python and the cx_Oracle package.
While connecting, it throws the error below.
'ORA-21561: OID generation failed\n'
My sample code:
import cx_Oracle

DSN = cx_Oracle.makedsn(host=server, port=port, service_name=database)
# The resulting DSN looks like this:
# (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.9)(PORT=50244))(CONNECT_DATA=(SERVICE_NAME=devXDB)))
con = cx_Oracle.Connection(user, password, DSN)
However, I am able to connect from the same machine (the Mac) using SQL Developer and PyCharm's database browser. I searched around and did not find any solution related to a remote instance; the suggested solutions seem to work only for local instances, where one edits /etc/hosts or the equivalent file on Windows 10.
Thanks in advance.
This was indeed an /etc/hosts issue.
One thing to note: even if the Oracle instance is running on a remote machine, your client machine's /etc/hosts (the machine from which you are connecting to the Oracle instance) should have an entry like this:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost localhost.localdomain Amits-iMac.local
Replace 'Amits-iMac.local' with your client's hostname.
