Testing Azure managed identity locally with Python - python

I am trying to set up Python code to test the Azure managed identity services; with C# I am able to test the code locally. Is there any way to test the Python code locally?
Enabled managed identity on the Azure App Service.
Added the application user (the App Service) to the Azure SQL server and granted permissions.
This is my sample Python code to connect to Azure SQL with managed identity:
import pyodbc as db
import pandas as pd
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    conn = db.connect('Driver={ODBC Driver 17 for SQL Server};'
                      'Server=testdb.database.windows.net;'
                      'Database=studentdb;'
                      'Authentication=ActiveDirectoryIntegrated;')
    query = pd.read_sql_query('SELECT * FROM STUDENT', conn)
    frame = pd.DataFrame(query)
    return func.HttpResponse(frame.to_json(orient="index"), status_code=200)
Can anyone help me test this code locally? I do not have permissions on Azure to deploy this code and test it.

You can use the Moto library, which allows you to mock AWS services locally. You can run Lambda functions the same way you would run a Python script:
if __name__ == "__main__":
    event = {}      # a Lambda event is normally a dict
    context = None  # the context object is not needed for a local run
    lambda_handler(event, context)
If you are in a virtual environment, this ensures that all the required dependencies are installed properly for the Lambda function with the correct Python version.
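Note that Moto mocks AWS services (not Azure ones). For illustration, here is a minimal sketch of testing a handler locally against a mocked S3 bucket (assuming moto >= 5, which provides the mock_aws decorator; the handler and bucket names are made up for the example):

import boto3
from moto import mock_aws

def lambda_handler(event, context):
    # the handler under test: list bucket names in the (mocked) account
    s3 = boto3.client("s3", region_name="us-east-1")
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]

@mock_aws
def test_lambda_handler():
    # create a fake bucket inside the mock, then run the handler against it
    boto3.client("s3", region_name="us-east-1").create_bucket(Bucket="demo")
    assert lambda_handler({}, None) == ["demo"]

if __name__ == "__main__":
    test_lambda_handler()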

If you check this document from Microsoft, you will find that:
Managed Identity cannot be used to authenticate locally-running applications. Your application must be deployed to an Azure service that supports Managed Identity.
It's important to understand that the Managed Identity feature in Azure is only relevant once, in this case, the App Service is deployed.
As an alternative you can use DefaultAzureCredential() from the Azure.Identity library, which works both when running locally and for the deployed web app. You can read more about how Azure.Identity works in the official docs.
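A minimal sketch of that approach (assuming the azure-identity package and ODBC Driver 17 are installed; the server and database names are taken from the question):

import struct
import pyodbc
from azure.identity import DefaultAzureCredential

# DefaultAzureCredential tries several sources in order: environment variables,
# managed identity (when deployed), Azure CLI / developer-tool logins, etc.
credential = DefaultAzureCredential()
token = credential.get_token("https://database.windows.net/.default")

# pack the token the way the ODBC driver expects: UTF-16-LE bytes, length-prefixed
token_bytes = token.token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pre-connection attribute defined by the driver
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=testdb.database.windows.net;"
    "Database=studentdb;",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)

Locally this picks up your az login (or environment) credentials; on the deployed App Service the same code switches to the managed identity without changes.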
Most of the time we use Azure MSI to connect to Azure SQL in an Azure Function with Python. You can use Azure MSI to get an Azure AD access token and then use that token to connect to Azure SQL.
Once you have enabled the system-assigned identity on your Azure Web App and granted it SQL permissions, you can access the database directly from Python, as shown in the snippet below.
import pyodbc
from logzero import logger

# placeholder values; set these for your environment
driver = "{ODBC Driver 17 for SQL Server}"
server = "testdb.database.windows.net"
database = "studentdb"

with pyodbc.connect(
    "Driver=" + driver + ";Server=" + server + ";PORT=1433;Database=" + database
    + ";Authentication=ActiveDirectoryMsi",
) as conn:
    logger.info("Successful connection to database")
    with conn.cursor() as cursor:
        rows = cursor.execute("select @@version").fetchall()
Following are the parameters used above:
Driver: use "{ODBC Driver 17 for SQL Server}"
Server: the SQL server that hosts your database
Database: the name of your database
Authentication: "ActiveDirectoryMsi", which selects the managed-identity connection method
See the SQL database access with managed identity from Azure Web App document and the Configure your local Python dev environment for Azure document for more information.

Related

How to connect from GKE to Cloud SQL using Python and Private IP

I want to connect to my MySQL database from my GKE pods with Python, using a private IP.
I've done all the configuration, and the connection works inside a test pod through bash using:
mysql -u root -p --host X.X.X.X --port 3306
But it doesn't work inside my Python app... maybe I'm missing something.
Here is my current code:
from google.cloud.sql.connector import Connector, IPTypes
import sqlalchemy

# initialize Connector object
connector = Connector(ip_type=IPTypes.PRIVATE)

# function to return the database connection object
def getconn():
    conn = connector.connect(
        INSTANCE_CONNECTION_NAME,
        "pymysql",
        user=DB_USER,
        password=DB_PASS,
        db=DB_NAME,
    )
    return conn

# create connection pool with 'creator' argument to our connection object function
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)
I'm still getting this error:
aiohttp.client_exceptions.ClientResponseError: 403, message="Forbidden: Authenticated IAM principal does not seem authorized to make API request. Verify 'Cloud SQL Admin API' is enabled within your GCP project and 'Cloud SQL Client' role has been granted to IAM principal.", url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/manifest-altar-223913/instances/rapminerz-apps/connectSettings')
Check the workaround below:
Verify the Workload Identity setup.
If not OK, please follow workload-identity troubleshooting to see what's wrong.
If the setup is OK, please just follow the error message "Verify 'Cloud SQL Admin API' is enabled within your GCP project and 'Cloud SQL Client' role has been granted to IAM principal."
You can search for 'Cloud SQL Admin API' in the Cloud Console and make sure it is enabled.
For the Google service account, grant it the 'Cloud SQL Client' role, for example as sketched below.
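Those two steps with the gcloud CLI might look like this (a sketch; the project ID and service-account address are placeholders for your own values):

gcloud services enable sqladmin.googleapis.com --project=my-project
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"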
Please go through the Cloud SQL Python Connector documentation for more details.

Python: AWS Aurora Serverless Data API: password authentication failed for user

I am running out of ideas.
I have created an Aurora Serverless RDS cluster (version 1) with the Data API enabled. I now wish to execute SQL statements against it using the Data API (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html).
I made a small test script using the provided guidelines (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.calling:~:text=Calling%20the%20Data%20API%20from%20a%20Python%20application):
import boto3

session = boto3.Session(region_name="eu-central-1")
rds = session.client("rds-data")
secret = session.client("secretsmanager")

cluster_arn = "arn:aws:rds:eu-central-1:<accountID>:cluster:aurorapostgres"
secret_arn = "arn:aws:secretsmanager:eu-central-1:<accountID>:secret:dbsecret-xNMeQc"

secretvalue = secret.get_secret_value(
    SecretId=secret_arn
)
print(secretvalue)

SQL = "SELECT * FROM pipelinedb.dataset"
res = rds.execute_statement(
    resourceArn=cluster_arn,
    secretArn=secret_arn,
    database="pipelinedb",
    sql=SQL
)
print(res)
However I get the error message:
BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: FATAL: password authentication failed for user "bjarki"; SQLState: 28P01
I have verified the following:
The secret value is correct (see the check sketched after this list)
The secret JSON structure correctly follows the recommended structure (https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_secret_json_structure.html)
The IAM user running the Python script has Admin access to the account, and thus is privileged enough
The cluster is running in public subnets (internet gateways attached to the route tables), and the ACLs and security groups are fully open
The user "bjarki" is the master user and thus should have the required DB privileges to run the query
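One way to double-check the first two points in code (a sketch that builds on the script above, assuming the secret follows the recommended JSON structure, which stores the login under "username"):

import json

# secretvalue comes from the get_secret_value() call in the script above
secret_json = json.loads(secretvalue["SecretString"])
# this is the login the Data API hands to Postgres; it must match an existing DB user
print(secret_json["username"])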
I am out of ideas on why this error is appearing - any good ideas?
Try this AWS tutorial from the AWS Code Library. It shows how to use the AWS SDK for Python (Boto3) to create a web application that tracks work items in an Amazon Aurora database and emails reports by using Amazon Simple Email Service (Amazon SES). This example uses a front end built with React.js to interact with a Flask-RESTful Python backend.
Integrate a React.js web application with AWS services.
List, add, and update items in an Aurora table.
Send an email report of filtered work items by using Amazon SES.
Deploy and manage example resources with the included AWS CloudFormation script.
https://docs.aws.amazon.com/code-library/latest/ug/cross_RDSDataTracker_python_3_topic.html
Try running the CDK to properly set up the database too.
Once you have successfully implemented this example, you will get a working React front end with a Python backend.

Python connection to Google BigQuery using ADC

I am trying to get data from a Google BigQuery table using Python. I don't have service account access, but I have individual access to BigQuery using gcloud, and I have an application default credentials JSON file. I need to know how to make a connection to BigQuery using ADC.
Code snippet:
from google.cloud import bigquery

conn = bigquery.Client()
query = "select * from my_data.test1"
conn.query(query)
When I run the above code snippet, I get an error saying:
NewConnectionError: <urllib3.connection.HttpsConnection object at 0x83dh46bdu640>: Failed to establish a new connection:[Error -2] Name or Service not known
Note: the environment variable GOOGLE_APPLICATION_CREDENTIALS is not set (it is empty).
Your script works for me because I authenticated using end-user credentials from the Google Cloud SDK; once you have the SDK installed you can simply run:
gcloud auth application-default login
The credentials from your JSON file are not being passed to the BigQuery client automatically; you can pass them explicitly, e.g.:
client = bigquery.Client(project=project, credentials=credentials)
To set that up, you can follow these steps: https://cloud.google.com/bigquery/docs/authentication/end-user-installed
Alternatively, this thread has some good details on setting the credentials environment variable: Setting GOOGLE_APPLICATION_CREDENTIALS for BigQuery Python CLI
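Putting this together, a minimal sketch (assuming you have already run gcloud auth application-default login, so ADC can find your end-user credentials; the query is taken from the question):

import google.auth
from google.cloud import bigquery

# google.auth.default() resolves application default credentials, including
# the JSON file written by `gcloud auth application-default login`
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

client = bigquery.Client(project=project, credentials=credentials)
rows = client.query("select * from my_data.test1").result()
for row in rows:
    print(row)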

Databricks SQL Server connection using integrated authentication

I'm trying to connect my Databricks cluster to an existing SQL Server database using Python, and I would like to leverage the integrated authentication method. I'm getting the error com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication.
jdbcHostname = "sampledb-dev.database.windows.net"
jdbcPort = 1433
jdbcDatabase = "sampledb-dev"
jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)

connectionProperties = {
    "integratedSecurity": "true",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
print(jdbcUrl)

query = "(SELECT * FROM TABLE1.Domain)"
domains = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
display(domains)
You can't use integratedSecurity=true with an Azure PaaS database; integratedSecurity is an on-premises construct.
You need to use authentication=ActiveDirectoryIntegrated or authentication=ActiveDirectoryPassword; please see the JDBC docs here:
https://learn.microsoft.com/en-us/sql/connect/jdbc/connecting-using-azure-active-directory-authentication?view=sql-server-ver15
You will also need your account to be a user with appropriate permissions on that database, synced to Azure AD. If you use multi-factor authentication, that's not supported for JDBC, and your admin will need to provide you with a non-MFA enabled account. You'll know if this is the case because you will get a WSTrust error when trying to connect.
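For example, the connection properties from the question could be adjusted like this (a sketch; the user and password values are placeholders for a non-MFA Azure AD account, and jdbcUrl and query come from the question's code):

connectionProperties = {
    "authentication": "ActiveDirectoryPassword",
    "user": "someuser@yourtenant.onmicrosoft.com",  # placeholder AAD account
    "password": "<password>",                       # placeholder
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
domains = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)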

Internal error when connecting to Google Cloud SQL

I designed a simple website using Flask, and my goal was to deploy it on Google App Engine. I started working on it locally and used Google Cloud SQL for the database. I used cloud_sql_proxy to open port 3306 and interact with my GC SQL instance, and it works fine locally. This is the way I'm connecting my application to GC SQL:
I have an app.yaml file in which I've defined my global variables:
env_variables:
  CLOUDSQL_SERVER: '127.0.0.1'
  CLOUDSQL_CONNECTION_NAME: 'myProjectName:us-central1:project'
  CLOUDSQL_USER: 'user'
  CLOUDSQL_PASSWORD: 'myPassword'
  CLOUDSQL_PORT: 3306
  CLOUDSQL_DATABASE: 'database'
and from my local machine I do:
db = MySQLdb.connect(CLOUDSQL_SERVER,CLOUDSQL_USER,CLOUDSQL_PASSWORD,CLOUDSQL_DATABASE,CLOUDSQL_PORT)
and if I want to get connected on App Engine, I do:
cloudsql_unix_socket = os.path.join('/cloudsql', CLOUDSQL_CONNECTION_NAME)
db = MySQLdb.connect(unix_socket=cloudsql_unix_socket, user=CLOUDSQL_USER, passwd=CLOUDSQL_PASSWORD, db=CLOUDSQL_DATABASE)
The static part of the website runs, but when, for example, I want to log in with a username and password stored in GC SQL, I receive an internal error.
I tried another way: I started a Compute Engine instance, defined my global variables in config.py, and installed Flask, mysqldb, and everything needed to start my application. I also used cloud_sql_proxy on that Compute Engine instance and tried this syntax to connect to the GC SQL instance:
db = MySQLdb.connect(CLOUDSQL_SERVER,CLOUDSQL_USER,CLOUDSQL_PASSWORD,CLOUDSQL_DATABASE,CLOUDSQL_PORT)
But it had the same problem. I don't think it's a permission issue, as I added my Compute Engine instance's IP address to the authorized networks of GC SQL, and in IAM & Admin, myprojectname@appspot.gserviceaccount.com has the Editor role!
Can anyone help me figure out where the problem is?
Alright! I solved the problem. I followed the Google Cloud documentation but I had problems. I added a simple '/' in:
cloudsql_unix_socket = os.path.join('/cloudsql', CLOUDSQL_CONNECTION_NAME)
Instead of '/cloudsql' it should be '/cloudsql/'.
I know it's weird because os.path.join should add the '/' to the path, but for strange reasons unknown to me, it wasn't doing so.
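A likely explanation (my assumption, not something the answer confirms): os.path.join discards all earlier components whenever a later component is an absolute path, so a connection-name value that itself starts with '/' silently drops the '/cloudsql' prefix:

import os

# the normal case: the separator is inserted as expected
print(os.path.join('/cloudsql', 'proj:region:instance'))   # /cloudsql/proj:region:instance

# but if the second component starts with '/', the first is dropped entirely
print(os.path.join('/cloudsql', '/proj:region:instance'))  # /proj:region:instance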
