Azure Functions Python connect to Azure SQL DB

I created an Azure Function with Python and want to write some data into an Azure SQL DB.
If I run the code on my local machine via AZ Function Debugger, everything is working. But when I deploy everything to Azure, I only get a message that there is an error (no additional specific information).
I think this is related to the ODBC Driver?
I'm using the following code to connect and insert data:
import logging
import pyodbc

with pyodbc.connect('DRIVER=' + driver + ';SERVER=tcp:' + server + ';PORT=' + port + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password + ';Authentication=ActiveDirectoryPassword', timeout=120) as conn:
    with conn.cursor() as cursor:
        try:
            cursor.execute(data)
        except pyodbc.Error as exc:
            # Log the actual driver error instead of swallowing it.
            logging.error("Can't execute SQL query: %s", exc)
I use driver = '{ODBC Driver 17 for SQL Server}' as the driver.
I assume that this is missing in Azure? How can this issue be fixed? What is the right approach to connect from Azure Functions to an Azure SQL DB via Python?

It seems the ODBC driver is included; it was just poorly documented:
https://github.com/MicrosoftDocs/azure-docs/issues/54423
There is an example project here:
https://github.com/kevin808/azure-function-pyodbc-MI
The full tutorial including creating the system assigned identity can be found here:
https://techcommunity.microsoft.com/t5/apps-on-azure-blog/how-to-connect-azure-sql-database-from-python-function-app-using/ba-p/3035595
There is currently a SQL extension under development, but it only supports C# at the moment. Python support has been requested as an enhancement, so you can add your 👍 to the issue below if you would like to use bindings:
https://github.com/Azure/azure-functions-sql-extension/issues/172
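For reference, a minimal sketch of the managed-identity connection the linked tutorial describes (the server and database names below are placeholders, and it assumes the Function App's system-assigned identity has already been granted access to the database):

import logging
import pyodbc

# Placeholder server/database; the Function's system-assigned identity must exist as a database user.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:my-server.database.windows.net,1433;"
    "DATABASE=my-database;"
    "Authentication=ActiveDirectoryMsi"
)

try:
    with pyodbc.connect(conn_str, timeout=120) as conn:
        with conn.cursor() as cursor:
            cursor.execute("SELECT 1")
            logging.info("Connected: %s", cursor.fetchone())
except pyodbc.Error as exc:
    # Surface the full driver error in the Function logs.
    logging.error("SQL connection failed: %s", exc)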

Related

Invalid object name gets returned for certain databases in MS SQL Server Management Studio

I can connect to the databases shown in MS SQL Server Management Studio from my Python script (using pyodbc) without issues.
I then created a database called tempdb (see the DB explorer pic referred to below) by running a direct query in MS SQL Management Studio, and created a table (DepartmentTest) in it.
Now, in my script if I do:
cursor.execute("SELECT * FROM DepartmentTest")
I get:
[Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name
I also tried a few variations of the above query, such as:
dbo.DepartmentTest
[dbo].DepartmentTest
(instead of just DepartmentTest as above).
I don't have this issue when connecting to the master database and accessing its tables.
For example, I can execute:
cursor.execute("SELECT * FROM MSreplication_options")
and I get back the contents, i.e. anything under System Tables works fine with the script.
In the explorer pic referred to below: I can access the tables circled in green. I can't access my table, circled in red.
I assume I am not correctly pointing to my table with the syntax I am using, but I'm not sure how to modify my query. (It's as though anything under System Tables is fine to access with my code.)
(I did connect to the correct database name with my code.)
Thanks and regards
You should double-check that you're connected to the right database, which is supposed to be 'tempdb'. If you do that and try running the query again, it should work.
It seems as though the trusted connection to the server was the problem. Once I connected with user and password credentials, I was able to access that database.
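A minimal sketch of that working setup, assuming SQL credentials and the driver name from the error message above (server, user and password are placeholders):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};"           # match whatever ODBC driver you actually use
    "SERVER=localhost;"              # placeholder server
    "DATABASE=tempdb;"               # name the target database explicitly
    "UID=my_user;PWD=my_password;"   # placeholder SQL credentials
)
cursor = conn.cursor()

# Confirm which database the connection actually landed in.
cursor.execute("SELECT DB_NAME()")
print(cursor.fetchone())

# Schema-qualify the table to avoid ambiguity.
cursor.execute("SELECT * FROM dbo.DepartmentTest")
print(cursor.fetchall())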

Attempting to establish a connection to Amazon Redshift from Python Script

I am trying to connect to an Amazon Redshift table. I created the table using SQL and now I am writing a Python script to append a data frame to the database. I am unable to connect to the database and suspect something is wrong with my syntax or something else. My code is below.
from sqlalchemy import create_engine
conn = create_engine('jdbc:redshift://username:password#localhost:port/db_name')
Here is the error I am getting.
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string
Thanks!
There are basically two options for connecting to Amazon Redshift using Python.
Option 1: JDBC Connection
This is a traditional connection to a database. The popular choice tends to be using psycopg2 to establish the connection, since Amazon Redshift resembles a PostgreSQL database. You can download specific JDBC drivers for Redshift.
This connection would require the Redshift database to be accessible to the computer making the query, and the Security Group would need to permit access on port 5439. If you are trying to connect from a computer on the Internet, the database would need to be in a Public Subnet and set to Publicly Accessible = Yes.
See: Establish a Python Redshift Connection: A Comprehensive Guide - Learn | Hevo
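A minimal sketch of Option 1 with psycopg2 (the cluster endpoint, database, user and password below are placeholders):

import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="db_name",
    user="username",
    password="password",
)
with conn.cursor() as cursor:
    cursor.execute("SELECT * FROM my_table LIMIT 10")
    for row in cursor.fetchall():
        print(row)
conn.close()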
Option 2: Redshift Data API
You can directly query an Amazon Redshift database by using the Boto3 library for Python, including an execute_statement() call to query data and a get_statement_result() call to retrieve the results. This also works with IAM authentication rather than having to create additional 'database users'.
There is no need to configure Security Groups for this method, since the request is made to AWS (on the Internet). It also works with Redshift databases that are in private subnets.
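A minimal sketch of Option 2 with boto3 (assumes AWS credentials are configured; the cluster identifier, database and user are placeholders):

import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

# Submit the query; the call returns immediately with a statement Id.
response = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",
    Database="db_name",
    DbUser="username",
    Sql="SELECT * FROM my_table LIMIT 10",
)
statement_id = response["Id"]

# Poll until the statement finishes, then fetch the result set.
while client.describe_statement(Id=statement_id)["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

result = client.get_statement_result(Id=statement_id)
for record in result["Records"]:
    print(record)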

Python script, query to GCP postgresql db from local machine?

I have a GCP workspace, complete with a PostgreSQL database. On a frequent basis, I need to insert and/or select rows from the DB. I've been searching for a Python script that will (A) connect to GCP, then (B) connect to the DB, then (C) query a specific table. I'd prefer not to hard-code my credentials if possible; that way I could share this script with others on my team, and provided they were authorized users, it would run without any hiccups.
Does anyone have such a script?
I believe I just answered your question here: Access GCP Cloud SQL from AI notebook?
Using the Cloud SQL Python Connector which was mentioned in the other post, you can run a script that looks something like this to connect to your database and run a query:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import os

from google.cloud.sql.connector import connector

# Connect to the database
conn = connector.connect(
    os.getenv("INSTANCE_CONNECTION_NAME"),
    "pg8000",
    user=os.getenv("DB_USER"),
    password=os.getenv("DB_PASSWORD"),
    db=os.getenv("DB_NAME")
)

# Execute a query
cursor = conn.cursor()
cursor.execute("SELECT * from my_table")

# Fetch the results
result = cursor.fetchall()

# Do something with the results
for row in result:
    print(row)
The instance connection name should be in the format project:region:instance. If you don't want to hard code database credentials, you can read them in from environment variables instead.

Connecting to jTDS Microsoft server with SQLalchemy and Presto

I'm trying to connect to an old-school jTDS MS SQL server for a variety of different analysis tasks, firstly just using Python with SQLAlchemy, and also via Tableau and Presto.
Focusing on SQLAlchemy first, at the moment I'm getting an error of:
Data source name not found and no default driver specified
I based my attempt on this thread: Connecting to SQL Server 2012 using sqlalchemy and pyodbc, i.e.:
import urllib
import sqlalchemy as sa

params = urllib.parse.quote_plus("DRIVER={FreeTDS};"
                                 "SERVER=x-y.x.com;"
                                 "DATABASE=;"
                                 "UID=user;"
                                 "PWD=password")
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
Connecting works fine through DBeaver, using a jTDS SQL Server (MSSQL) driver (which is labelled as legacy).
Curious as to how to resolve this issue; I'll keep researching away, but would appreciate any help.
I imagine there is an old driver on the internet I need to integrate into SQLAlchemy to begin with, and then perhaps migrate this data to something newer.
Appreciate your time.

Python Connect to Oracle DB

I currently use pyodbc to connect to MS SQL Server and MySQL, but now need to access an Oracle database as well.
I have Oracle SQL Developer installed on my work computer (but there doesn't seem to be a separate Net Manager client, per other SO posts), which I can use to access the DB.
Ideally, I would run what I need to in python, but am having difficulties. As it stands, I have created a linked server object to the Oracle DB in a MS SQL Server DB as a work around, but this isn't ideal.
What do I need to do to get PYODBC (or substitute) to connect to Oracle? Thanks very kindly.
I ran into the same issue where I could connect to a database via Oracle SQL Developer but not via pyodbc. Someone else did most of the database setup, so I wasn't sure of the proper connection parameters. I'll run you through how I was able to connect on a Windows computer.
In the Start Menu I typed "odbc" and selected "Microsoft ODBC Administrator". Under the "System DSN" tab I found my DSN name (we'll call it myDSN) and corresponding driver (mine was "Oracle in OraClient11g_home2"). I also have to specify a username and password for my database so my connection line now looks like this:
cnxn = pyodbc.connect(driver='{Oracle in OraClient11g_home2}', dsn='myDSN', uid='HODOR', pwd='hodor')
Maybe at this point it will work for you, but I still wasn't able to connect. This computer is a mess of 32 and 64 bit drivers so I figured I was pointing to the wrong one. So once again into the Start Menu, where under All Programs I found a folder called "Oracle in OraClient11g_home2" and right under it, one called "Oracle in OraClient11g_home32Bit". I changed my connection line in Python to the following:
cnxn = pyodbc.connect(driver='{Oracle in OraClient11g_home32Bit}', dsn='myDSN', uid='HODOR', pwd='hodor')
And it connected.
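A minimal sketch building on the answer above, using the DSN and placeholder credentials from the answer plus a simple sanity-check query:

import pyodbc

cnxn = pyodbc.connect(
    driver="{Oracle in OraClient11g_home32Bit}",  # the 32-bit driver that worked above
    dsn="myDSN",
    uid="HODOR",
    pwd="hodor",
)
cursor = cnxn.cursor()
cursor.execute("SELECT * FROM dual")  # simple Oracle sanity check
print(cursor.fetchone())
cnxn.close()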
