subprocess.check_output arguments syntax for querying a mysql database - python

I am looking into querying a mysql database through python, but I am not allowed to use any python connector for mysql. It seems that the subprocess.check_output function should be able to return a result, but so far I can't get any output.
In this database I have created a Schema (through MySQL Workbench) which I name here as "Mydatabase" as well as a table with some entries which I name here as "Mytable".
My code is:
==============
import subprocess
hostname = 'DESKTOP-XXXXXX' # taken from mysql status
port = '3306' # taken from mysql status
username = 'my_username_goes_here'
password = 'my_password_goes_here'
database = 'the_sql_database_name_goes_here'
result = subprocess.check_output(['mysql', hostname, port, username, database,
                                  'SELECT * FROM Mydatabase.Mytable;'])
print result
==============
When I run this I get:
"subprocess.CalledProcessError: Command '[....]' returned non-zero exit status 1
From the Python documentation I can't find the correct syntax for connecting to a MySQL server, so it is possible that something is off with my syntax.
I would be extremely pleased if someone could tell me where to find the correct syntax for this to work.
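For reference, a minimal sketch of how the mysql command-line client is typically invoked through subprocess.check_output, with each connection parameter passed as its own flag (the host, credentials, and query below are the placeholders from the question; -h/-P/-u/--password/-e are the standard mysql client options):

import subprocess

hostname = 'DESKTOP-XXXXXX'
port = '3306'
username = 'my_username_goes_here'
password = 'my_password_goes_here'

# Each value needs its own flag; bare positional arguments are treated
# by the mysql client as database names rather than host/port/user.
result = subprocess.check_output([
    'mysql',
    '-h', hostname,
    '-P', port,
    '-u', username,
    '--password=' + password,
    '-e', 'SELECT * FROM Mydatabase.Mytable;',
])
print(result)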

Related

"Not a valid password" error when reflecting an encrypted Access database

I'm a beginner with SQLAlchemy and I want to reflect my database table into an object, but it always returns an invalid password error, even though the password is correct. I don't understand why this happens. When I run the inspector it returns my table names, so my password, connection string, and create_engine call must be correct.
When my database has no password it works fine and I can reflect it into an object, which is strange.
So why does reflecting a database that has a password always fail with "Not a valid password"?
(Screenshots: my two MS Access tables, and the reflection error, where the table names are still returned.)
This is my code:
Because I was curious, I also ran a test SELECT, and it succeeded in retrieving the data, so the connection itself is created successfully.
When I add some code for testing, everything looks correct, so why can't it reflect? Please help.
My references: connection_string and SQLAlchemy Automap Reflect.
I have just released sqlalchemy-access version 1.1.1 to address this issue. Note that, as described in Getting Connected, if you want to use a pass-through ODBC connection string with an encrypted database, you need to supply the password in two places:
driver = "{Microsoft Access Driver (*.mdb, *.accdb)}"
db_path = r"C:\Users\Public\test\sqlalchemy-access\gord_test.accdb"
pwd = "tiger"
connection_string = (
f"DRIVER={driver};"
f"DBQ={db_path};"
f"PWD={pwd};"
f"ExtendedAnsiSQL=1;"
)
connection_uri = (
f"access+pyodbc://admin:{pwd}#/"
f"?odbc_connect={urllib.parse.quote_plus(connection_string)}"
)
engine = sa.create_engine(connection_uri)
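With the engine created, a reflection along the lines the asker was attempting might look like the sketch below (assuming SQLAlchemy 1.4+, where prepare() accepts autoload_with; the exact call differs on older versions):

from sqlalchemy.ext.automap import automap_base

Base = automap_base()
Base.prepare(autoload_with=engine)   # reflect tables into mapped classes
print(list(Base.classes.keys()))     # names of the reflected tables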

Queries working in snowflake web UI but not consistently through the python sqlalchemy connector

I have a sqlalchemy connection set up to Snowflake which works, as I can run some queries and get results out. The query attempts are also logged in my query history.
My connection:
engine = create_engine(URL(
    user, password, account, database, warehouse, role
))
connection = engine.connect()
However, most of the time my queries fail with an OperationalError (i.e. it's a Snowflake error, https://docs.sqlalchemy.org/en/13/errors.html#error-e3q8), but these same queries run fine in the Snowflake web UI.
For example if I run
test_query = 'SELECT * FROM TABLE DB1.SCHEMA1.TABLE1'
test = pd.read_sql(test_query, connection)
When I look at my query history it shows the sqlalchemy query failing, then a second later the base query itself running successfully. However, I'm not sure where that output goes in the Snowflake setup, and why it's not coming back through my sqlalchemy connection. What I'm seeing:
Query = 'DESC TABLE /* sqlalchemy:_has_object */ "SELECT * FROM DB1"."SCHEMA1"."TABLE1"'
Error code = 2003  Error message = SQL compilation error: Database '"SELECT * FROM DB1"' does not exist.
Then, one second later, the query itself runs successfully, but it's not clear where this result goes, since it doesn't come back over the connection.
Query = SELECT * FROM TABLE DB1.SCHEMA1.TABLE1
Any help much appreciated!
Thanks
You can try adding the schema here as well:
engine = create_engine(URL(
    account='',
    user='',
    password='',
    database='',
    schema='',
    warehouse='',
    role='',
))
connection = engine.connect()
It is very unlikely that a query runs in the WebUI but fails with a syntax error when run via the CLI or another connector.
I suggest you print the query that is sent via the CLI or the connector, run the same query in the WebUI, and also note which role you are running the query as.
Please share your findings.
The mentioned query (SELECT * FROM TABLE DB1.SCHEMA1.TABLE1) is not Snowflake-supported SQL syntax.
The link here will help you with more details.
Hope this helps!
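To illustrate the syntax point, a minimal sketch of the corrected query (the stray TABLE keyword removed), run over the same pandas/SQLAlchemy connection the asker already has; the table name is taken from the question:

import pandas as pd

test_query = 'SELECT * FROM DB1.SCHEMA1.TABLE1'   # no TABLE keyword
test = pd.read_sql(test_query, connection)
print(test.head())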

How to query a (Postgres) RDS DB through an AWS Jupyter Notebook?

I'm trying to query an RDS (Postgres) database through Python, more specifically a Jupyter Notebook. Overall, what I've been trying for now is:
import boto3

client = boto3.client('rds-data')
response = client.execute_sql(
    awsSecretStoreArn='string',
    database='string',
    dbClusterOrInstanceArn='string',
    schema='string',
    sqlStatements='string'
)
The error I've been receiving is:
BadRequestException: An error occurred (BadRequestException) when calling the ExecuteSql operation: ERROR: invalid cluster id: arn:aws:rds:us-east-1:839600708595:db:zprime
In the end, it was much simpler than I thought, nothing fancy or specific. It was basically a solution I had used before when accessing one of my local DBs: simply import the appropriate library for your database type (Postgres, MySQL, etc.) and then connect to it in order to execute queries through Python.
I don't know if it is the best solution, since making queries through Python will probably be slower than doing them directly, but it's what works for now.
import psycopg2

conn = psycopg2.connect(database='database_name',
                        user='user',
                        password='password',
                        host='host',
                        port='port')
cur = conn.cursor()
cur.execute('''
    SELECT *
    FROM table;
''')
cur.fetchall()
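In a Jupyter notebook it is often handy to pull the result straight into a DataFrame instead; a small sketch assuming pandas is installed, reusing the same placeholder connection parameters:

import pandas as pd
import psycopg2

conn = psycopg2.connect(database='database_name', user='user',
                        password='password', host='host', port='port')
df = pd.read_sql('SELECT * FROM table;', conn)   # result rows as a DataFrame
df.head()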

How do I connect Python to my Postgres Server?

I have been having major trouble connecting my Python shell to my Postgres server. I am doing this on Windows. I have installed psycopg2 and everything needed for this to work, however it is still not working.
import psycopg2
conn=psycopg2.connect("dbname = 'test' user ='postgres' host ='localhost' password = 'mypassword'")
It gives me an error telling me that the database "test" does not exist, however it does! If you guys have any advice at all on what I should test out, that would be amazing. Thank you!
You can lay out the connection parameters as a string and pass it to the connect() function, like:
conn = psycopg2.connect("dbname=test user=postgres password=postgres")
Or you can use keyword arguments, like:
conn = psycopg2.connect(host="localhost",database="test", user="postgres", password="postgres")
If it still fails, you should check on the PostgreSQL side: try to connect to the database in question from the command line and see whether the error reappears. If it does, something is missing on the DB server side.
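As a quick check from Python (a sketch with placeholder credentials), you can also ask the server which databases it actually has; if test is not in the list, you are connecting to a different cluster or port than the one that holds it:

import psycopg2

# Connect to the default 'postgres' maintenance database to inspect the server.
conn = psycopg2.connect(host="localhost", database="postgres",
                        user="postgres", password="mypassword")
cur = conn.cursor()
cur.execute("SELECT datname FROM pg_database;")
print([row[0] for row in cur.fetchall()])   # every database in this cluster
conn.close()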

python-mysql cursor.execute failing with access denied error

I have two machines: local_machine, server_machine. I have mysql server on server_machine and sftp server on local_machine. I am trying to send sritest.csv file (UTF-8) from local_machine to server_machine using python. These are the contents of sritest.csv:
1,2,3
I have the sql query saved in sritest.sql and these are the contents of the file:
LOAD DATA INFILE '{}'
INTO TABLE TESTBED_STAGING.test
COLUMNS TERMINATED BY ','
;
This is the python script I have now:
import MySQLdb
import os
import string

# Open database connection
db = MySQLdb.connect(host="1.2.3.4", port=3306, user="app_1",
                     passwd="passwd", db="TESTBED_STAGING")
cursor = db.cursor()

# Query under testing
sql = open('sritest.sql', 'r').read()
print sql
l = os.listdir(".")
for file_name in l:
    if file_name.endswith('sritest.csv'):
        print 'the csv file we are reading is: ' + file_name
        #try:
        cursor = db.cursor()
        print 'filename is ' + sql.format(file_name)
        cursor.execute(sql.format(file_name))
        db.commit()
        '''
        except Exception:
            # Rollback in case there is any error
            db.rollback()
            print 'ERROR - So, rollback :( :( '
        '''
# disconnect from server
db.close()
In the above script I commented out the try/except so I can see the error where it breaks. Currently the code breaks at the cursor.execute(sql.format(file_name)) line with this error:
OperationalError: (1045, "Access denied for user 'app_1'@'%' (using password: YES)")
I have been playing around but not able to fix it. Any suggestions/ideas?
For starters, creating a cursor on every loop iteration is not a good idea. You've already created a cursor earlier, so you can remove the cursor declaration inside the for loop.
Second, I think your error is due to a lack of remote access to the MySQL server at 1.2.3.4 for user app_1. Try this on the server's MySQL console:
GRANT ALL PRIVILEGES ON TESTBED_STAGING.* TO 'app_1'@'%';
Lastly, try to avoid the print "line" notation and switch to the print("line") notation for compatibility with Python 3.x.
I figured out the answer and decided to leave this question open for those who might face a similar problem:
In the MySQL server (server_machine), make sure you do this after you start mysql:
mysql> grant all privileges on *.* to 'app_1'@'%' identified by 'passwd';
Change LOAD DATA INFILE '{}' in sritest.sql to LOAD DATA LOCAL INFILE '{}'.
In the Python code, edit the MySQLdb.connect statement as:
db = MySQLdb.connect(host="1.2.3.4", port=3306, user="app_1",
                     passwd="passwd", db="TESTBED_STAGING", local_infile=1)
All errors are eliminated and the data is transferred.
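Putting the resolution together, a minimal sketch of the working flow (host, credentials, table, and file name are taken from the question; it assumes the server permits LOCAL INFILE):

import MySQLdb

# local_infile=1 lets this client send the CSV from the local machine.
db = MySQLdb.connect(host="1.2.3.4", port=3306, user="app_1",
                     passwd="passwd", db="TESTBED_STAGING", local_infile=1)
cursor = db.cursor()
cursor.execute("""
    LOAD DATA LOCAL INFILE 'sritest.csv'
    INTO TABLE TESTBED_STAGING.test
    COLUMNS TERMINATED BY ','
""")
db.commit()
db.close()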
