I'm facing a problem authenticating to RDS from a Python 3.9 Lambda using IAM roles, with either the mysql-connector-python 8.0.27 or the PyMySQL 1.0.2 module. In both cases the problem is that the token is not being passed correctly as the connection password.
With mysql.connector I'm using this construct:
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
token = client.generate_db_auth_token(DBHostname=hostname, Port=port, DBUsername=username)
mysql.connector.connect(host=hostname, user=username, passwd=token, database=dbname, auth_plugin='mysql_clear_password')
but I get the error ubyte format requires 0 <= number <= 255, which leads me to believe that the token string is too long and the library can't handle it.
So I tried switching to PyMySQL, but now it fails with "Access denied for user 'mydbuser'@'172.16.1.246' (using password: NO)":
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
....
token = client.generate_db_auth_token(DBHostname=hostname, Port=port, DBUsername=username)
my_db = pymysql.connect(
    host=hostname,
    user=username,
    password=token,
    database=dbname,
    connect_timeout=2)
The token is generated and I can see it in the logs. The IAM roles/policies are correct, since I'm able to connect from Lambda using a standard password, and with the mysql CLI client on EC2 using a token (the MySQL Community client, not MariaDB's, because the latter doesn't support the --enable-cleartext-plugin option).
I even hardcoded the token, and it still fails with "no password".
UPDATE #1:
After more experiments, I took a token generated with the AWS CLI (aws rds generate-db-auth-token --hostname my-mysql-db.us-east-2.rds.amazonaws.com --port 3306 --region us-east-2 --username mydbuser), put it into mysql-connector-python's code, and it magically worked!
I've compared the two tokens. The difference is that aws rds generate-db-auth-token returns a token whose X-Amz-Signature parameter is 64 bytes long, so mysql-connector processes it fine.
However, boto3's client.generate_db_auth_token returns a token that also carries an X-Amz-Security-Token parameter, 820 bytes long, and it utterly destroys poor mysql-connector.
A bug? A feature? Is it possible to get the short 64-byte variant?
PyMySQL didn't work with either of the tokens, so that's probably a bug in that library itself.
UPDATE #2:
I've decided to test the tokens themselves with the mysql CLI client and got quite confusing results:
When using a token made from a long-term AWS access key (the kind that starts with 'AKIA') and generated via aws rds generate-db-auth-token, I was able to connect to the DB.
When using a token from Lambda's client.generate_db_auth_token, the token didn't work in the mysql CLI either, failing with ERROR 1045 (28000): Access denied for user 'lambdauser'@'172.16.1.20' (using password: YES). The main difference is that Lambda is using a short-term access key (one that starts with 'ASIA').
Could it be that Lambda's access key is somehow out of sync? Or does Lambda need extra roles/policies to make its tokens trusted by other AWS services?
MySQL command format:
mysql --host=$RDSHOST --port=3306 --enable-cleartext-plugin --user=lambdauser --password=$TOKEN
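For completeness, a minimal sketch of the PyMySQL call with TLS enabled (RDS accepts IAM auth tokens only over SSL/TLS; the CA bundle path here is an assumption):

import boto3
import pymysql

client = boto3.client("rds")
token = client.generate_db_auth_token(
    DBHostname=hostname, Port=port, DBUsername=username
)

# IAM auth tokens are rejected on plaintext connections, so pass the
# RDS CA bundle (downloaded from AWS beforehand; path is an assumption)
my_db = pymysql.connect(
    host=hostname,
    user=username,
    password=token,
    database=dbname,
    port=port,
    ssl={"ca": "/opt/rds-combined-ca-bundle.pem"},
    connect_timeout=5,
)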
I want to connect to my MySQL database from my GKE pods using Python, over a private IP.
I've done all the configuration, and the connection works inside a test pod through bash using
mysql -u root -p --host X.X.X.X --port 3306
But it doesn't work inside my Python app... maybe I'm missing something.
Here is my current code:
from google.cloud.sql.connector import Connector, IPTypes
import pymysql
import sqlalchemy

# initialize Connector object
connector = Connector(ip_type=IPTypes.PRIVATE)

# function to return the database connection object
def getconn():
    conn = connector.connect(
        INSTANCE_CONNECTION_NAME,
        "pymysql",
        user=DB_USER,
        password=DB_PASS,
        db=DB_NAME
    )
    return conn
# create connection pool with 'creator' argument to our connection object function
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)
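The pool is then used like this (a trivial test query just to exercise the connection):

# borrow a connection from the pool and run a test query
with pool.connect() as db_conn:
    rows = db_conn.execute(sqlalchemy.text("SELECT NOW()")).fetchall()
    print(rows)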
I'm still getting this error:
aiohttp.client_exceptions.ClientResponseError: 403, message="Forbidden: Authenticated IAM principal does not seeem authorized to make API request. Verify 'Cloud SQL Admin API' is enabled within your GCP project and 'Cloud SQL Client' role has been granted to IAM principal.", url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/manifest-altar-223913/instances/rapminerz-apps/connectSettings')
Check the workaround below:
Verify the Workload Identity setup.
If it's not OK, please follow the workload-identity troubleshooting guide to see what's wrong.
If the setup is OK, just follow the error message: "Verify 'Cloud SQL Admin API' is enabled within your GCP project and 'Cloud SQL Client' role has been granted to IAM principal."
You can search for 'Cloud SQL Admin API' in the Cloud Console and make sure to enable it.
For the Google service account, grant the 'Cloud SQL Client' role to it.
Please go through the Cloud SQL Python Connector docs for more details.
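Both fixes can also be applied from the CLI; a rough sketch (the project ID and service-account email are placeholders):

gcloud services enable sqladmin.googleapis.com --project=MY_PROJECT

gcloud projects add-iam-policy-binding MY_PROJECT \
    --member="serviceAccount:my-app-sa@MY_PROJECT.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"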
I am running out of ideas.
I have created an Aurora Serverless RDS cluster (version 1) with the Data API enabled. I now wish to execute SQL statements against it using the Data API (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html)
I have made a small test script using the provided guidelines (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.calling:~:text=Calling%20the%20Data%20API%20from%20a%20Python%20application)
import boto3
session = boto3.Session(region_name="eu-central-1")
rds = session.client("rds-data")
secret = session.client("secretsmanager")
cluster_arn = "arn:aws:rds:eu-central-1:<accountID>:cluster:aurorapostgres"
secret_arn = "arn:aws:secretsmanager:eu-central-1:<accountID>:secret:dbsecret-xNMeQc"
secretvalue = secret.get_secret_value(
    SecretId=secret_arn
)
print(secretvalue)
SQL = "SELECT * FROM pipelinedb.dataset"
res = rds.execute_statement(
    resourceArn=cluster_arn,
    secretArn=secret_arn,
    database="pipelinedb",
    sql=SQL
)
print(res)
However, I get the error message:
BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: FATAL: password authentication failed for user "bjarki"; SQLState: 28P01
I have verified the following:
The secret value is correct (see the sanity-check snippet after this list).
The secret JSON structure correctly follows the recommended structure (https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_secret_json_structure.html).
The IAM user running the Python script has Admin access to the account, and thus is privileged enough.
The cluster is running in public subnets (internet gateway attached to the route tables), and the network ACLs and security groups are fully open.
The user "bjarki" is the master user and thus should have the required DB privileges to run the query.
I am out of ideas on why this error is appearing - any good ideas?
Try this AWS tutorial located in the AWS Examples Code Library. It shows how to use the AWS SDK for Python (Boto3) to create a web application that tracks work items in an Amazon Aurora database and emails reports by using Amazon Simple Email Service (Amazon SES). The example uses a front end built with React.js to interact with a Flask-RESTful Python backend, and covers how to:
Integrate a React.js web application with AWS services.
List, add, and update items in an Aurora table.
Send an email report of filtered work items by using Amazon SES.
Deploy and manage example resources with the included AWS CloudFormation script.
https://docs.aws.amazon.com/code-library/latest/ug/cross_RDSDataTracker_python_3_topic.html
Try running the CDK script to properly set up the database too.
Once you have successfully implemented this example, you will get a working React front end with a Python backend.
I'm trying to read from / write to an AWS RDS Proxy with a Postgres RDS instance as the endpoint.
The operation works with psql, but fails on the same client with pg8000 or psycopg2 as the client library in Python.
The operation works with pg8000 and psycopg2 if I use the RDS instance directly as the endpoint (without the RDS Proxy).
sqlalchemy/psycopg2 error message:
Feature not supported: RDS Proxy currently doesn’t support command-line options.
A minimal version of the code I use:
from sqlalchemy import create_engine
import os
from dotenv import load_dotenv
load_dotenv()
login_string = os.environ['login_string_proxy']
engine = create_engine(
    login_string,
    client_encoding="utf8",
    echo=True,
    connect_args={"options": "-csearch_path={}".format("testing")},
)
engine.execute("INSERT INTO testing.mytable (product) VALUES ('123')")
pg8000: the place where it stops and waits for something is in core.py:
def sock_read(b):
    try:
        return self._sock.read(b)
    except OSError as e:
        raise InterfaceError("network error on read") from e
A minimal version of the code I use:
import pg8000
import os
from dotenv import load_dotenv
load_dotenv()
db_connection = pg8000.connect(
    database=os.environ['database'],
    host=os.environ['host'],
    port=os.environ['port'],
    user=os.environ['user'],
    password=os.environ['password'])
db_connection.run("INSERT INTO mytable (data) VALUES ('data')")
db_connection.commit()
db_connection.close()
The RDS Proxy logs always look normal for all the examples I mentioned, e.g.:
A new client connected from ...:60614.
Received Startup Message: [username="", database="", protocolMajorVersion=3, protocolMinorVersion=0, sslEnabled=false]
Proxy authentication with PostgreSQL native password authentication succeeded for user "" with TLS off.
A TCP connection was established from the proxy at ...:42795 to the database at ...:5432.
The new database connection successfully authenticated with TLS off.
I opened up all ports via security groups on the RDS and the RDS proxy and I used an EC2 inside the VPC.
I tried with autocommit on and off.
The "command-line options" being referred to are the -csearch_path={} part.
Remove that, and then once the connection is established, execute set search_path = whatever as your first query.
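With SQLAlchemy you can do that automatically on every new connection; a minimal sketch, reusing the question's login_string:

from sqlalchemy import create_engine, event

engine = create_engine(login_string, client_encoding="utf8", echo=True)

@event.listens_for(engine, "connect")
def set_search_path(dbapi_conn, conn_record):
    # runs for every new DBAPI connection, replacing the rejected
    # '-csearch_path=testing' startup option
    cur = dbapi_conn.cursor()
    cur.execute("SET search_path TO testing")
    cur.close()

engine.execute("INSERT INTO mytable (product) VALUES ('123')")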
This is a known issue: pg8000 can't connect to the AWS RDS Proxy (Postgres). I made a PR, https://github.com/tlocke/pg8000/pull/72 - let's see if Tony Locke (the father of pg8000) approves the change. (If not, you have to change these lines of core.py yourself: https://github.com/tlocke/pg8000/pull/72/files)
The line

self._write(FLUSH_MSG)

becomes:

if code != PASSWORD:
    self._write(FLUSH_MSG)
I'm trying to connect my Databricks cluster to an existing SQL Server database using Python. I would like to leverage the integrated authentication method, but I'm getting the error com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication.
jdbcHostname = "sampledb-dev.database.windows.net"
jdbcPort= 1433
jdbcDatabase = "sampledb-dev"
jdbcUrl = "jdbc:sqlserver://{0}:{1}; database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "integratedSecurity": "true",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
print(jdbcUrl)
query ="(SELECT * FROM TABLE1.Domain)"
domains = spark.read.jdbc(url = jdbcUrl, table = query, properties = connectionProperties)
display(domains)
You can't use integratedSecurity=true with an Azure PaaS database; integrated security is an on-premises construct.
You need to use authentication=ActiveDirectoryIntegrated or authentication=ActiveDirectoryPassword instead; please see the JDBC docs here:
https://learn.microsoft.com/en-us/sql/connect/jdbc/connecting-using-azure-active-directory-authentication?view=sql-server-ver15
You will also need your account to be a user with appropriate permissions on that database, synced to Azure AD. If you use multi-factor authentication, that's not supported for JDBC, and your admin will need to provide you with a non-MFA-enabled account. You'll know if this is the case because you will get a WSTrust error when trying to connect.
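A sketch of the question's code adjusted for ActiveDirectoryPassword (the AAD account and password are placeholders, and this assumes the driver's MSAL/ADAL dependencies are available on the cluster):

jdbcUrl = "jdbc:sqlserver://sampledb-dev.database.windows.net:1433;database=sampledb-dev"

connectionProperties = {
    "authentication": "ActiveDirectoryPassword",
    "user": "myuser@mytenant.onmicrosoft.com",  # placeholder AAD user
    "password": "<aad-password>",               # placeholder; MFA accounts won't work over JDBC
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

query = "(SELECT * FROM TABLE1.Domain) AS t"  # subqueries need an alias in T-SQL
domains = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
display(domains)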
OK, so I have a script that connects to an MSSQL DB, and I need to run it as a service, which I have already accomplished. But when it runs as a service, it overrides the credentials I put in the connection string and connects to the DB with the AD computer account.
It runs perfectly when I run it on its own and not as a service.
My Connection String is:
'DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDB;UID=DOMAIN\myusername;PWD=A;Trusted_Connection=True'
The Error is:
Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'DOMAIN\COMPUTERNAME'")
Any Advice?
In the last project I worked on, I found that DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName is sufficient to initiate a DB connection in trusted mode.
If it still does not work, it is probably either:
1) the account DEEPTHOUGHT on the MSSQL server is not set up properly, or
2) the runAs account of the service is not set up properly (why does the error message mention the computer name instead of 'DEEPTHOUGHT'?)
The following connection string will use Windows authentication, using the account running the service to authenticate with the database. Change the service account to one that has database access:
'DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName;Trusted_Connection=yes'
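A minimal pyodbc sketch using that string (server and database names are placeholders):

import pyodbc

# Trusted_Connection=yes authenticates as the Windows account the
# process runs under -- for a service, that is the service account
conn = pyodbc.connect(
    'DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName;Trusted_Connection=yes'
)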
To change the service account:
Start -> Run -> services.msc
Right-click the service -> Properties
Log On tab -> select "This account" and enter an account that has database access
OK/Apply to save the changes