Query Cloud SQL Database from Python

I want to insert data into a Cloud SQL MySQL database from a local Python application. Is this possible, and if so, how?
I have tried running the examples at the bottom of https://cloud.google.com/appengine/docs/python/cloud-sql/#Python_complete_python_example
db = MySQLdb.connect(unix_socket='/cloudsql/PROJECT-ID:INSTANCE-NAME', user='phil')
cursor = db.cursor()
cursor.execute('SHOW VARIABLES')
for r in cursor.fetchall():
    webapp2.RequestHandler.response.write('%s\n' % str(r))
db.close()
However I get the error:
`Can't connect to local MySQL server through socket`

Of course it's possible, but that's the documentation for using it specifically from App Engine. Instead, use the docs for connecting from an external application: you'll need to configure access for your client's IP address, then point MySQLdb at the instance's IP rather than a local socket.
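A minimal sketch of the IP-based connection, assuming you've added your client machine's address to the instance's authorized networks (the IP address and credentials below are placeholders):
import MySQLdb

# Placeholder public IP and credentials for the Cloud SQL instance.
db = MySQLdb.connect(host='203.0.113.20', user='phil', passwd='your-password', db='your-database')
cursor = db.cursor()
cursor.execute('SHOW VARIABLES')
for r in cursor.fetchall():
    print(r)
db.close()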

Related

Attempting to establish a connection to Amazon Redshift from Python Script

I am trying to connect to an Amazon Redshift table. I created the table using SQL, and now I am writing a Python script to append a data frame to the database. I am unable to connect to the database and suspect something is wrong with my syntax. My code is below.
from sqlalchemy import create_engine
conn = create_engine('jdbc:redshift://username:password@localhost:port/db_name')
Here is the error I am getting.
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string
Thanks!
There are basically two options for connecting to Amazon Redshift using Python.
Option 1: JDBC Connection
This is a traditional direct connection to the database. The popular choice tends to be psycopg2, since Amazon Redshift resembles a PostgreSQL database. Redshift-specific JDBC drivers are also available for download.
This connection would require the Redshift database to be accessible to the computer making the query, and the Security Group would need to permit access on port 5439. If you are trying to connect from a computer on the Internet, the database would need to be in a Public Subnet and set to Publicly Accessible = Yes.
See: Establish a Python Redshift Connection: A Comprehensive Guide - Learn | Hevo
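For example, a sketch through SQLAlchemy's psycopg2 dialect (the cluster endpoint and credentials are placeholders). Note the URL scheme is postgresql+psycopg2 rather than jdbc:, which is why the create_engine() call in the question fails to parse:
from sqlalchemy import create_engine, text

# Placeholder Redshift endpoint and credentials; SQLAlchemy expects an
# RFC 1738 URL, so there is no jdbc: prefix.
engine = create_engine(
    "postgresql+psycopg2://username:password"
    "@my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com:5439/db_name"
)

with engine.connect() as conn:
    for row in conn.execute(text("SELECT * FROM my_table LIMIT 10")):
        print(row)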
Option 2: Redshift Data API
You can directly query an Amazon Redshift database by using the Boto3 library for Python, including an execute_statement() call to query data and a get_statement_result() call to retrieve the results. This also works with IAM authentication rather than having to create additional 'database users'.
There is no need to configure Security Groups for this method, since the request is made to AWS (on the Internet). It also works with Redshift databases that are in private subnets.
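A rough sketch of that flow (the cluster identifier, database, and user are placeholders; the Data API is asynchronous, so you poll for completion before fetching results):
import time
import boto3

client = boto3.client("redshift-data")

# Submit the query (placeholder cluster/database/user).
resp = client.execute_statement(
    ClusterIdentifier="my-cluster",
    Database="db_name",
    DbUser="username",
    Sql="SELECT * FROM my_table LIMIT 10",
)
statement_id = resp["Id"]

# Poll until the statement finishes.
while client.describe_statement(Id=statement_id)["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

# Retrieve the result rows.
result = client.get_statement_result(Id=statement_id)
for record in result["Records"]:
    print(record)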

Python script, query to GCP postgresql db from local machine?

I have a GCP workspace, complete with a PostgreSQL database. On a frequent basis, I need to insert and/or select rows from the db. I've been searching for a Python script that will (A) connect to GCP, then (B) connect to the db, then (C) query a specific table. I'd prefer not to hard-code my credentials if possible, so that I could share this script with others on my team, and provided they were authorized users, it would run without any hiccups.
Does anyone have such a script?
I believe I just answered your question here: Access GCP Cloud SQL from AI notebook?
Using the Cloud SQL Python Connector which was mentioned in the other post, you can run a script that looks something like this to connect to your database and run a query:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import os
from google.cloud.sql.connector import connector
# Connect to the database
conn = connector.connect(
    os.getenv("INSTANCE_CONNECTION_NAME"),
    "pg8000",
    user=os.getenv("DB_USER"),
    password=os.getenv("DB_PASSWORD"),
    db=os.getenv("DB_NAME")
)
# Execute a query
cursor = conn.cursor()
cursor.execute("SELECT * from my_table")
# Fetch the results
result = cursor.fetchall()
# Do something with the results
for row in result:
    print(row)
The instance connection name should be in the format project:region:instance. If you don't want to hard code database credentials, you can read them in from environment variables instead.

Connect to Linked Server via SQL Server pyodbc connection?

I can currently connect to my SQL Server and query any database I want to directly.
The problem is when I want to query a linked server. I cannot directly reference the linked server's name in the connect() method; I have to connect to a local database first and then run an OPENQUERY() against the linked server.
This seems like an odd workaround. Is there a way to query the linked server directly (from my research, you cannot connect directly to a linked server), or at least connect to the server without specifying a database, so that I can then run OPENQUERY() for anything without having to first connect to a database?
Example of what I have to do currently:
import pyodbc
ex_value = "SELECT * FROM OPENQUERY(LinkedServerName,'SELECT * FROM LinkedServerName.SomeTable')"
# I have to connect to some local database on the server and cannot connect to linked server initially.
odbc_driver, server, db = '{ODBC Driver 17 for SQL Server}', 'MyServerName', 'LocalDatabase'
with pyodbc.connect(driver=odbc_driver, host=server, database=db, trusted_connection='yes') as conn:
    conn.autocommit = False
    cursor = conn.cursor()
    cursor.execute(ex_value)
    tables = cursor.fetchall()
    for row in tables:
        print('Row: {}'.format(row))
    cursor.close()
As Sean mentioned, a linked server is just a reference to another server that's stored within the local server.
You do not need to manage 100+ user credentials though. If you have the users using Windows auth, and you have Kerberos working between the servers, the linked server can just impersonate you when it connects to the other server via the linked server definition.
Then you can use either 4 part names to refer to objects on the other server, or use OPENQUERY when you want more control over what gets executed where.
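For illustration, a four-part name version of the query from the question might look something like this (the database and schema segments of the name are placeholder guesses):
import pyodbc

# Hypothetical four-part name: server.database.schema.object
query = "SELECT * FROM LinkedServerName.SomeDatabase.dbo.SomeTable"

with pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}', host='MyServerName',
                    database='LocalDatabase', trusted_connection='yes') as conn:
    cursor = conn.cursor()
    cursor.execute(query)
    for row in cursor.fetchall():
        print('Row: {}'.format(row))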
Finally, if they're both SQL Servers and both use the same collation, make sure you set the linked server option to say they are collation compatible. That can make a major difference to your linked server performance. I regularly see systems where that isn't set and it should be.
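Setting that option is a one-off administrative step, for example (a hypothetical sketch; the server names are placeholders, and it typically requires elevated permissions):
import pyodbc

# Mark the linked server as collation compatible.
with pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}', host='MyServerName',
                    database='master', trusted_connection='yes', autocommit=True) as conn:
    conn.execute("EXEC sp_serveroption 'LinkedServerName', 'collation compatible', 'true'")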

Why should we set local_infile=1 in SQLAlchemy to load a local file? "LOAD DATA LOCAL INFILE not allowed" issue in SQLAlchemy

I am using SQLAlchemy to connect to a MySQL database and found some strange behavior.
If I query
LOAD DATA LOCAL INFILE 'C:\\\\Temp\\\\JaydenW\\\\iata_processing\\\\icer\\\\rename\\\\ICER_2017-10-12T090337Z023870.csv'
It pops an error:
sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1148, u'The used command is not allowed with this MySQL version')
[SQL: u"LOAD DATA LOCAL INFILE 'C:\\\\Temp\\\\JaydenW\\\\iata_processing\\\\icer\\\\rename\\\\ICER_2017-10-12T090337Z023870.csv' INTO TABLE genie_etl.iata_icer_etl LINES TERMINATED BY '\\n' IGNORE 1 Lines (rtxt);"]
(Background on this error at: http://sqlalche.me/e/2j85)
And I find the reason is that:
I need to set the parameter as
args = "mysql+pymysql://"+username+":"+password+"#"+hostname+"/"+database+"?
local_infile=1"
If I use the official MySQL connection library, I do not need to do so.
myConnection = MySQLdb.connect(host=hostname, user=username, passwd=password, db=database)
Can anyone help me to understand the difference between the two mechanisms?
The reason is that the mechanisms use different drivers.
In SQLAlchemy you appear to be using the pymysql engine, which uses the PyMySQL Connection class to create the DB connection. That one requires the user to explicitly pass the local_infile parameter if they want to use the LOAD DATA LOCAL command.
The other example uses MySQLdb, which is basically a wrapper around the MySQL C API (and to my knowledge not the official connection library; that would be MySQL Connector Python, which is also available on SQLAlchemy as mysqlconnector). This one apparently creates the connection in a way that the LOAD DATA LOCAL is enabled by default.
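In other words, with SQLAlchemy and PyMySQL you have to forward the flag to the driver yourself. Two equivalent sketches, with placeholder credentials (note the server's own local_infile system variable must also be enabled):
from sqlalchemy import create_engine

# Option A (placeholder credentials): pass local_infile=1 in the URL query string.
engine = create_engine("mysql+pymysql://username:password@hostname/database?local_infile=1")

# Option B: forward the flag to pymysql.connect() via connect_args.
engine = create_engine(
    "mysql+pymysql://username:password@hostname/database",
    connect_args={"local_infile": 1},
)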

Overwrite a database

I have an online database and connect to it by using MySQLdb.
db = MySQLdb.connect(......)
cur = db.cursor()
cur.execute("SELECT * FROM YOUR_TABLE_NAME")
data = cur.fetchall()
Now, I want to write the whole database to my localhost (overwrite). Is there any way to do this?
Thanks
If I'm reading you correctly, you have two database servers, A and B (where A is a remote server and B is running on your local machine) and you want to copy a database from server A to server B?
In all honesty, if this is a one-off, consider using the mysqldump command-line tool, either directly or by calling it from Python.
If not, the last answer on http://bytes.com/topic/python/answers/24635-dump-table-data-mysqldb details the SQL needed to define a procedure that outputs tables and data, though this may well miss subtleties that mysqldump does not.
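If you do end up driving mysqldump from Python, a rough sketch looks like this (hosts, credentials, and the database name are placeholders): dump the remote database, then pipe the dump into the local server.
import subprocess

# Dump the remote database (placeholder host and credentials).
dump = subprocess.run(
    ["mysqldump", "-h", "remote.example.com", "-u", "remote_user",
     "-pREMOTE_PASSWORD", "your_database"],
    check=True, capture_output=True,
)

# Load the dump into the local server; mysqldump emits DROP TABLE IF EXISTS
# statements by default, so matching tables are overwritten.
subprocess.run(
    ["mysql", "-h", "localhost", "-u", "local_user",
     "-pLOCAL_PASSWORD", "your_database"],
    input=dump.stdout, check=True,
)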
