How to connect to Cloud SQL Server from an external Python program?

So I am trying to communicate with a Google Cloud SQL Server instance that I created, from an external Python program I have written in VS Code, but I don't know where to begin. Any help would be useful.

I'd recommend using the Cloud SQL Python Connector to manage your connections to Cloud SQL. It supports the pytds driver and should help resolve your trouble connecting to a SQL Server instance from a Python application.
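If you don't already have the package, it can be installed with the pytds extra (the same command appears in an answer further down):
pip install "cloud-sql-python-connector[pytds]"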
from google.cloud.sql.connector import Connector
import pytds
import sqlalchemy

# initialize the Cloud SQL Python Connector
connector = Connector()

# build a connection for the pytds driver
def getconn() -> pytds.Connection:
    conn = connector.connect(
        "PROJECT:REGION:INSTANCE",  # Cloud SQL instance connection name
        "pytds",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        db="YOUR_DB",
    )
    return conn

# create a connection pool to re-use connections
pool = sqlalchemy.create_engine(
    "mssql+pytds://localhost",
    creator=getconn,
)

# query or insert into the Cloud SQL database
with pool.connect() as db_conn:
    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()
    # do something with the results
    for row in result:
        print(row)
For more detailed examples and additional parameters, refer to the README of the repository.

I think you can take inspiration from this: Python Django,
"Run the app on your local computer"

Related

Insert data from buckets into tables of a Cloud SQL for SQL Server instance using local Python

I am trying to insert data (txt files) from buckets into a Cloud SQL for SQL Server instance using my local Python.
I can connect to the buckets, but I have trouble connecting to the SQL Server instance and inserting the data from the txt files.
import os, sys
from google.cloud import storage

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'ServiceKey_GooglCloud.json'

storage_client = storage.Client()
mybucket = storage_client.get_bucket('my-bucket')
Do you know a way in Python to insert data into the tables of the SQL Server instance?
You can use the Cloud SQL Python connector:
Github Cloud SQL Python connector
PyPi Cloud SQL Python connector
According to the documentation, the Cloud SQL Python Connector is a package to be used alongside a database driver. Currently supported drivers are:
pymysql (MySQL)
pg8000 (PostgreSQL)
asyncpg (PostgreSQL)
pytds (SQL Server)
The Python package can be installed for SQL Server with pip:
pip install "cloud-sql-python-connector[pytds]"
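The insert example below assumes a connection pool created with the connector, along the lines of the first answer above; a minimal sketch (the instance name and credentials are placeholders):
from google.cloud.sql.connector import Connector
import pytds
import sqlalchemy

# initialize the connector and build a pooled engine, as in the earlier answer
connector = Connector()

def getconn() -> pytds.Connection:
    return connector.connect(
        "PROJECT:REGION:INSTANCE",  # placeholder instance connection name
        "pytds",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        db="YOUR_DB",
    )

pool = sqlalchemy.create_engine("mssql+pytds://localhost", creator=getconn)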
Example
# insert statement
insert_stmt = sqlalchemy.text(
    "INSERT INTO my_table (id, title) VALUES (:id, :title)",
)

with pool.connect() as db_conn:
    # insert into database
    db_conn.execute(insert_stmt, id="book1", title="Book One")

    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()

    # do something with the results
    for row in result:
        print(row)
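To tie this back to the original question, here is a hedged sketch that reads one of the txt files from the bucket and inserts its rows, assuming each line holds a comma-separated id and title (the file name and line format are hypothetical, not from the thread):
# hypothetical glue code: 'my-file.txt' and the "id,title" line layout are assumptions
blob = mybucket.blob('my-file.txt')
content = blob.download_as_text()

with pool.connect() as db_conn:
    for line in content.splitlines():
        id_, title = line.split(',', 1)
        db_conn.execute(insert_stmt, id=id_, title=title)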

Issue Connecting to MSSQL Instance From Google Cloud Function

I am trying to connect to a MSSQL instance in Cloud SQL from a Cloud Function. I have gone through the necessary steps of setting up a private IP, a serverless VPC connector, and connecting my function to the VPC. I have been able to connect to the instance in Node.js, but Python suits my current needs better. The error I'm getting in the logs is:
pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server'
From all the examples I have read, it does not appear that you need to import the driver explicitly.
This is my process of connecting and executing a simple request.
import sqlalchemy
import pyodbc

def hello_world(request):
    # connect_simple()
    db = connect_tcp_socket()
    a = execute_request(db)
    return a

def connect_tcp_socket() -> sqlalchemy.engine.base.Engine:
    db_host = 'my_private_ip'
    db_user = 'my_db_user'
    db_pass = 'my_db_pass'
    db_name = 'my_db_name'
    db_port = 'my_db_port'
    connection_string = ('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + db_host +
                         ';PORT=' + db_port + ';DATABASE=' + db_name +
                         ';UID=' + db_user + ';PWD=' + db_pass + ';Encrypt=no')
    connection_url = sqlalchemy.engine.url.URL.create(
        "mssql+pyodbc", query={"odbc_connect": connection_string}
    )
    engine = sqlalchemy.create_engine(connection_url)
    return engine

def execute_request(db: sqlalchemy.engine.base.Engine):
    print('ok')
    with db.connect() as conn:
        result = conn.execute('SELECT @@VERSION')
        barray = []
        for row in result:
            barray.append(row)
        return barray
I'd recommend using the Cloud SQL Python Connector to connect to Cloud SQL from Python as it will not require the ODBC driver and is much easier to use within Cloud Functions/Cloud Run.
Just replace your connect_tcp_socket with the below connect_with_connector function.
from google.cloud.sql.connector import Connector, IPTypes
import pytds
import sqlalchemy

def connect_with_connector() -> sqlalchemy.engine.base.Engine:
    # initialize the Connector once and keep it alive for the engine's lifetime
    connector = Connector()

    def getconn() -> pytds.Connection:
        conn = connector.connect(
            "project-id:region:instance-name",  # Cloud SQL connection name
            "pytds",
            user="my-user",
            password="my-password",
            db="my-database",
            ip_type=IPTypes.PRIVATE,
        )
        return conn

    engine = sqlalchemy.create_engine(
        "mssql+pytds://localhost",
        creator=getconn,
    )
    return engine
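The function from the question could then be wired up roughly like this (a sketch, not from the original answer; the engine is created lazily so it is re-used across warm invocations):
# hypothetical wiring for the Cloud Function from the question
db = None

def hello_world(request):
    global db
    if db is None:
        # initialize the engine once per function instance
        db = connect_with_connector()
    with db.connect() as conn:
        result = conn.execute('SELECT @@VERSION').fetchall()
    return str(result)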
There is also a code sample for the Python Connector similar to the TCP-connection sample you are currently using.
Note: the pytds driver is not great at error handling. If you see OSError: [Errno 9] Bad file descriptor, it usually means your database user is missing proper permissions; grant it the necessary permissions from an admin user.
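For example, the grants could be applied from a privileged connection along these lines (a sketch; the table name, user name, and the admin_pool engine are placeholders, not from the thread):
# hypothetical: run from an engine built with admin credentials (admin_pool)
grant_stmt = sqlalchemy.text("GRANT SELECT, INSERT ON my_table TO [my-user]")
with admin_pool.connect() as conn:
    conn.execute(grant_stmt)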
Your requirements.txt should include the following:
cloud-sql-python-connector
SQLAlchemy
python-tds
sqlalchemy-pytds
There is also an interactive getting started Colab Notebook that will walk you through using the Python Connector without you needing to change a single line of code!
It makes connecting to Cloud SQL both easy and secure from Cloud Functions.

How to connect to a SQL Server linked server in Python

Normally, when trying to connect to a SQL Server DB in Python, I use the pyodbc package like this:
import pyodbc

conn = pyodbc.connect("Driver={SQL Server};"
                      "Server=<server-ip>;"
                      "Database=<DB-name>;"
                      "UID=<user-name>;"
                      "PWD=<password>;"
                      "Trusted_Connection=yes;"
                      )
However, I don't know how to connect to a linked server in Python. Say, for example, my linked server is called linked-server and has a DB called linked-DB. I have tried the same connection string as above with the database name changed to "Database=<linked-server>.<linked-DB>;", since that's how I query the linked-server DB in SSMS, but this doesn't work in Python.
Thank you very much for your help.
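Not from the original thread, but a common pattern is to keep the connection pointed at the main server and reference the linked server inside the query itself with a four-part name (or OPENQUERY); a sketch, where my_table is a placeholder:
import pyodbc

# connect to the main server as usual; the linked server is resolved server-side
conn = pyodbc.connect("Driver={SQL Server};"
                      "Server=<server-ip>;"
                      "Database=<DB-name>;"
                      "UID=<user-name>;"
                      "PWD=<password>;"
                      "Trusted_Connection=yes;")
cursor = conn.cursor()
# four-part name: [linked-server].[linked-DB].[schema].[table]
cursor.execute("SELECT * FROM [linked-server].[linked-DB].dbo.my_table")
rows = cursor.fetchall()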

Query Kerberized Hive with SQL Alchemy

I'm trying to query a Kerberized Hive cluster with SQL Alchemy. I'm able to submit queries using pyhs2, which confirms that it's possible to connect to and query Hive when authenticated by Kerberos:
import pyhs2

with pyhs2.connect(host='hadoop01.woolford.io',
                   port=10500,
                   authMechanism='KERBEROS') as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT * FROM default.mytable')
        records = cur.fetchall()
        # etc ...
I notice that Airbnb's Airflow uses SQL Alchemy and can connect to Kerberized Hive, so I imagine it's possible to do something like this:
engine = create_engine('hive://hadoop01.woolford.io:10500/default', connect_args={'?': '?'})
connection = engine.connect()
connection.execute("SELECT * FROM default.mytable")
# etc ...
I'm not sure what parameters should be set in the connect_args dictionary. Can you see what needs to be added to make this work (e.g. Kerberos service name, realm, etc.)?
update:
Under the hood SQL Alchemy is using PyHive to connect to Hive. The current version of PyHive, v0.2.1, doesn't support Kerberos.
I notice that someone from Yahoo created a pull request that provides support for Kerberos. This PR has not yet been merged/released, so I just copied the code from the PR into /usr/lib/python2.7/site-packages/pyhive/hive.py on the Superset server and created a connection like this:
engine = create_engine('hive://hadoop01:10500', connect_args={'auth': 'KERBEROS', 'kerberos_service_name': 'hive'})
Hopefully, the maintainer of PyHive will merge/release the support for Kerberos.
Install these libraries:
sasl
thrift
thrift-sasl
PyHive
Get your Kerberos ticket (e.g. with kinit) and then:
engine = create_engine('hive://HOST:10500/DB_NAME',
                       connect_args={'auth': 'KERBEROS', 'kerberos_service_name': 'hive'})
PS: /DB_NAME is optional.

Google App Engine and Cloud SQL: Lost connection to MySQL server at 'reading initial communication packet' SQL 2nd Gen

I'm getting an error similar to other posts on this subject.
I tried switching from a 1st gen to a 2nd gen SQL instance (both on us-central1), but it still doesn't work.
I copied my CLOUDSQL_PROJECT from the URL at the top of my project.
I copied my CLOUDSQL_INSTANCE from the properties section of the SQL page.
In my main.py, I'm trying to run Google's sample code, and it doesn't work (locally it does, of course):
if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
    db = MySQLdb.connect(
        unix_socket='/cloudsql/{}:{}'.format(
            CLOUDSQL_PROJECT,
            CLOUDSQL_INSTANCE),
        user=user, passwd=password)
# When running locally, you can either connect to a local running
# MySQL instance, or connect to your Cloud SQL instance over TCP.
else:
    db = MySQLdb.connect(host=host, user=user, passwd=password)

cursor = db.cursor()
cursor.execute('SHOW VARIABLES')
for r in cursor.fetchall():
    self.response.write('{}\n'.format(r))
The documentation is slightly outdated. You should always be able to use the "Instance connection name" property from the SQL properties page to construct the unix socket path; just append that value after the "/cloudsql/" prefix.
For second generation, the connection format is project:region:name. In your example, it maps to "hello-world-123:us-central1:sqlsomething3", and the unix socket path is "/cloudsql/hello-world-123:us-central1:sqlsomething3".
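Putting that together with the code from the question, a minimal sketch:
import MySQLdb

# "Instance connection name" copied verbatim from the SQL properties page,
# in project:region:name format (value taken from the question)
INSTANCE_CONNECTION_NAME = 'hello-world-123:us-central1:sqlsomething3'

db = MySQLdb.connect(
    unix_socket='/cloudsql/' + INSTANCE_CONNECTION_NAME,
    user=user, passwd=password)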
