Using credentials from Db2 (Warehouse) on Cloud to initialize flask-sqlalchemy - python

In a Flask app with flask-sqlalchemy, I am trying to initialize a connection to Db2 Warehouse on Cloud by setting SQLALCHEMY_DATABASE_URI to one of the parts provided in the service credentials. In the past, going with the uri component worked fine, but my new service has SSL connections only.
app.config['SQLALCHEMY_DATABASE_URI']=dbInfo['uri']
This results in connection errors:
File "/home/vcap/deps/0/python/lib/python3.6/site-packages/ibm_db_dbi.py", line 592, in connect
conn = ibm_db.connect(dsn, '', '', conn_options)
Exception: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "52.117.199.197". Communication function detecting the error: "recv". Protocol specific error code(s): "104", "*", "0". SQLSTATE=08001 SQLCODE=-30081 During handling of the above exception, another exception occurred:
It seems that the driver is not accepting the ssl=true option specified in the URI string. What parts of the service credentials should I use? Would I need to build the URI string manually?

This is only a partial answer because of a workaround. I am using the port information from the service credentials to modify the connection URI:
if dbInfo['port'] == 50001:
    # if we are on the SSL port, add an additional parameter for the driver
    app.config['SQLALCHEMY_DATABASE_URI'] = dbInfo['uri'] + "Security=SSL;"
else:
    app.config['SQLALCHEMY_DATABASE_URI'] = dbInfo['uri']
By adding Security=SSL to the uri, the driver picks up the info on SSL and uses the correct settings to connect to Db2.
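If you prefer not to patch the prebuilt uri, you can also assemble the URI from the individual fields in the service credentials. A minimal sketch, assuming the typical Db2 on Cloud credential keys (username, password, hostname, port, db) and the ibm_db_sa dialect for SQLAlchemy; the exact separator before Security=SSL may need adjusting to match your credentials:
ssl_suffix = ";Security=SSL;" if dbInfo['port'] == 50001 else ""
# key names below follow the usual Db2 on Cloud service credentials layout
app.config['SQLALCHEMY_DATABASE_URI'] = (
    "db2+ibm_db://{username}:{password}@{hostname}:{port}/{db}".format(**dbInfo)
    + ssl_suffix
)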

Related

Unable to connect NEO4J using neo4j python driver

I'm unable to connect to Neo4j using neo4j-python-driver version 5.3.0.
The requirement is to query the Neo4j DB with Cypher queries through the Neo4j Python driver.
It gives a "failed to connect to server" error even though the database is up and running and I can log in and use it through Neo4j Desktop.
Getting below error
neo4j.exceptions.ServiceUnavailable: Couldn't connect to <URI>:7687 (resolved to ()):
[SSLCertVerificationError] Connection Failed. Please ensure that your database is listening on the correct host and port and that you have enabled encryption if required. Note that the default encryption setting has changed in Neo4j 4.0. See the docs for more information. Failed to establish encrypted connection. (code 1: Operation not permitted)
Note: the URI is hidden in the above error.
I added the snippet below to ignore the certificate verification issue, but it doesn't solve the problem.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
Appreciate your help to resolve the issue.
I'm connecting via the snippet below:
from neo4j import GraphDatabase
import urllib3

# Ignores certificate verification issue
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Initialize connection to database
driver = GraphDatabase.driver('bolt+s://<URI>:7687', auth=('<username>', '<password>'))
query = 'match (n:Event{name:"test"}) return n'

# Run Cypher query
with driver.session() as session:
    info = session.run(query)
    for item in info:
        print(item)
There's no way to know from the question alone how you are connecting, but here is how we do it:
from neo4j import GraphDatabase
address="bolt://localhost:7687"
auth=('neo4j', "password")
driver = GraphDatabase.driver(address, auth=auth, encrypted=False)
....
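Note that urllib3.disable_warnings only affects urllib3-based HTTP clients, not the Bolt driver. Since the reported failure is an SSLCertVerificationError, another option (assuming the server presents a self-signed certificate) is the bolt+ssc scheme, which keeps encryption on but skips certificate verification. A minimal sketch with placeholder host and credentials:
from neo4j import GraphDatabase

# bolt+ssc = encrypted connection that accepts self-signed certificates
# (certificate verification is skipped); <host> and <password> are placeholders
driver = GraphDatabase.driver('bolt+ssc://<host>:7687', auth=('neo4j', '<password>'))
with driver.session() as session:
    print(session.run('RETURN 1').single())
driver.close()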
Have you tried py2neo?
I use the snippet below, with dev running in Docker and prod running on Aura.
import os
from py2neo import Graph

self.graph = Graph(os.getenv('DB_URI'),
                   auth=(os.getenv('DB_USER'), os.getenv('DB_PASS')))
DB_URI is 'neo4j://0.0.0.0:7687' on dev and 'neo4j+s://xxxx' on prod.

Access Azure EventHub with WebSocket and proxy

I'm trying to access Azure EventHub, but my network forces me to use a proxy and allows connections only over HTTPS (port 443).
Based on https://learn.microsoft.com/en-us/python/api/azure-eventhub/azure.eventhub.aio.eventhubproducerclient?view=azure-python
I added the proxy configuration and the TransportType.AmqpOverWebsocket parameter, and my producer looks like this:
async def run():
    producer = EventHubProducerClient.from_connection_string(
        "Endpoint=sb://my_eh.servicebus.windows.net/;SharedAccessKeyName=eh-sender;SharedAccessKey=MFGf5MX6Mdummykey=",
        eventhub_name="my_eh",
        auth_timeout=180,
        http_proxy=HTTP_PROXY,
        transport_type=TransportType.AmqpOverWebsocket,
    )
and I get an error:
File "/usr/local/lib64/python3.9/site-packages/uamqp/authentication/cbs_auth_async.py", line 74, in create_authenticator_async
raise errors.AMQPConnectionError(
uamqp.errors.AMQPConnectionError: Unable to open authentication session on connection b'EHProducer-a1cc5f12-96a1-4c29-ae54-70aafacd3097'.
Please confirm target hostname exists: b'my_eh.servicebus.windows.net'
I don't know what might be the issue.
Might it be related to this one ? https://github.com/Azure/azure-event-hubs-c/issues/50#issuecomment-501437753
You should be able to set up a proxy that the SDK uses to access Event Hubs: set the HTTP_PROXY dictionary with the proxy information. Behind the scenes, when a proxy is passed in, the connection automatically goes over WebSockets.
As @BrunoLucasAzure suggested, checking the ports on the proxy itself is worth doing, because based on the error message the connection made it past the proxy but can't resolve the endpoint.
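For reference, here is a minimal end-to-end sketch of the WebSocket-plus-proxy setup; the proxy host and port, connection string, and event hub name are placeholders, and the http_proxy dictionary keys follow the azure-eventhub documentation:
import asyncio
from azure.eventhub import EventData, TransportType
from azure.eventhub.aio import EventHubProducerClient

# placeholder proxy settings; add 'username'/'password' keys if the proxy requires auth
HTTP_PROXY = {'proxy_hostname': 'proxy.example.com', 'proxy_port': 443}

async def run():
    producer = EventHubProducerClient.from_connection_string(
        "<connection string>",
        eventhub_name="<event hub name>",
        http_proxy=HTTP_PROXY,
        transport_type=TransportType.AmqpOverWebsocket,  # AMQP over WebSocket, port 443
    )
    async with producer:
        batch = await producer.create_batch()
        batch.add(EventData("test event"))
        await producer.send_batch(batch)

asyncio.run(run())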

AWS RDS Proxy error (postgres) - RDS Proxy currently doesn’t support command-line options

I'm trying to read from or write to an AWS RDS Proxy with a Postgres RDS instance as the endpoint.
The operation works with psql but fails on the same client with pg8000 or psycopg2 as the client library in Python.
The operation works with pg8000 and psycopg2 if I use the RDS instance directly as the endpoint (without the RDS Proxy).
sqlalchemy/psycopg2 error message:
Feature not supported: RDS Proxy currently doesn’t support command-line options.
A minimal version of the code I use:
from sqlalchemy import create_engine
import os
from dotenv import load_dotenv
load_dotenv()
login_string = os.environ['login_string_proxy']
engine = create_engine(login_string, client_encoding="utf8", echo=True, connect_args={'options': '-csearch_path={}'.format("testing")})
engine.execute(f"INSERT INTO testing.mytable (product) VALUES ('123')")
pg8000: the place where it stops and waits for something is in core.py:
def sock_read(b):
    try:
        return self._sock.read(b)
    except OSError as e:
        raise InterfaceError("network error on read") from e
A minimal version of the code I use:
import pg8000
import os
from dotenv import load_dotenv
load_dotenv()
db_connection = pg8000.connect(database=os.environ['database'], host=os.environ['host'], port=os.environ['port'], user=os.environ['user'], password=os.environ['password'])
db_connection.run(f"INSERT INTO mytable (data) VALUES ('data')")
db_connection.commit()
db_connection.close()
The logs in the RDS Proxy always look normal for all the examples I mentioned, e.g.:
A new client connected from ...:60614.
Received Startup Message: [username="", database="", protocolMajorVersion=3, protocolMinorVersion=0, sslEnabled=false]
Proxy authentication with PostgreSQL native password authentication succeeded for user "" with TLS off.
A TCP connection was established from the proxy at ...:42795 to the database at ...:5432.
The new database connection successfully authenticated with TLS off.
I opened up all ports via security groups on the RDS and the RDS proxy and I used an EC2 inside the VPC.
I tried with autocommit on and off.
The 'command-line option' being referred to is the -csearch_path={}.
Remove that, and then once the connection is established execute set search_path = whatever as your first query, as sketched below.
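A minimal sketch of that workaround, keeping the question's setup (login_string as in the question) but moving the schema selection into a query:
from sqlalchemy import create_engine, text

# no '-csearch_path' in connect_args; set the search path per connection instead
engine = create_engine(login_string, client_encoding="utf8", echo=True)
with engine.begin() as conn:
    conn.execute(text("set search_path = testing"))
    conn.execute(text("INSERT INTO mytable (product) VALUES ('123')"))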
This is a known issue: pg8000 can't connect to AWS RDS Proxy (Postgres). I opened a PR, https://github.com/tlocke/pg8000/pull/72; let's see if Tony Locke (the father of pg8000) approves the change. (If not, you have to change these lines of core.py yourself: https://github.com/tlocke/pg8000/pull/72/files)
# before:
self._write(FLUSH_MSG)
# after:
if (code != PASSWORD):
    self._write(FLUSH_MSG)

Couchbase 4.0 Python SDK authentication error on passwordless bucket

I am experiencing an issue when I try to retrieve documents based on the view.
result = bucket.query("view", "view", limit=1, streaming=True)
for row in result:
    bucket.replace("aass_brewery-juleol", row.docid)
The exception:
couchbase.exceptions._AuthError_0x2 (generated, catch AuthError): <Key=u'aass_brewery-juleol', RC=0x2[Authentication failed. You may have provided an invalid username/password combination], Operational Error, Results=1, C Source=(src/multiresult.c,309)>
The bucket doesn't have any authentication besides the standard port (TCP port 11211, which needs SASL auth).

Failed to Login as 'Domain\ComputerName' pyodbc with py2exe

OK, so I have a script that connects to an MSSQL DB and needs to run as a service, which I have already accomplished. But when I run it as a service, it overrides the credentials I put in to connect to the DB and uses the AD computer account instead.
It runs perfectly when I run it on its own and not as a service.
My Connection String is:
'DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDB;UID=DOMAIN\myusername;PWD=A;Trusted_Connection=True'
The Error is:
Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'DOMAIN\COMPUTERNAME'")
Any Advice?
In the last project I worked on, I found that DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName is sufficient to initiate a DB connection in trusted mode.
If it still does not work, it is probably either:
1) the account DEEPTHOUGHT on the MSSQL server is not set up properly, or
2) the runAs account in the service is not set up properly (why does the error message mention 'ComputerName' instead of 'DEEPTHOUGHT'?).
The following connection string will use Windows authentication, using the account running the service to authenticate with the database. Change the service account to one that has database access:
'DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName;Trusted_Connection=yes'
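A minimal usage sketch (SERVERNAME and DBName are placeholders); querying SYSTEM_USER shows which account actually authenticated:
import pyodbc

conn = pyodbc.connect(
    'DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName;Trusted_Connection=yes'
)
cursor = conn.cursor()
cursor.execute('SELECT SYSTEM_USER')  # the Windows account the connection used
print(cursor.fetchone()[0])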
To change the service account:
1) Start -> Run -> services.msc
2) Right-click the service -> Properties
3) On the Log On tab, select 'This account' and enter the account credentials
4) OK/Apply to save the changes
