The following logic works with the MySQLdb module (see python mysqldb multiple cursors for one connection), but I am getting the following error with mysql.connector on cursor2.execute(sql):
"Unread result found."
I realize that I can use a join to combine these two simple SQL statements and avoid the need for a second cursor, but my real-world example is more complex and requires a second SQL statement.
Assuming I need to execute two separate SQL statements (one for the outer loop and one inside the loop), how should this be done with the mysql.connector module?
import datetime
import mysql.connector

db = mysql.connector.connect(user='alan', password='please', host='machine1', database='mydb')

cursor1 = db.cursor()
cursor2 = db.cursor()

sql = """
    SELECT userid,
           username,
           date
    FROM user
    WHERE date BETWEEN %s AND %s
"""

start_date = datetime.date(1999, 1, 1)
end_date = datetime.date(2014, 12, 31)

cursor1.execute(sql, (start_date, end_date))

for (userid, username, date) in cursor1:
    sql = """
        select count(*)
        from request
        where assigned = '%s'
    """ % (userid)
    cursor2.execute(sql)
    requestcount = cursor2.fetchone()[0]
    print userid, requestcount

cursor2.close()
cursor1.close()
db.close()
This MySQLdb version works just fine:
import datetime
import MySQLdb

db = MySQLdb.connect(user='alan', passwd='please', host='machine1', db='mydb')

cursor1 = db.cursor()
cursor2 = db.cursor()

sql = """
    SELECT userid,
           username,
           date
    FROM user
    WHERE date BETWEEN %s AND %s
"""

start_date = datetime.date(1999, 1, 1)
end_date = datetime.date(2014, 12, 31)

cursor1.execute(sql, (start_date, end_date))

for (userid, username, date) in cursor1:
    sql = """
        select count(*)
        from request
        where assigned = '%s'
    """ % (userid)
    cursor2.execute(sql)
    requestcount = cursor2.fetchone()[0]
    print userid, requestcount

cursor2.close()
cursor1.close()
db.close()
MySQL Connector/Python is non-buffered by default. This means the data is not fetched automatically and you need to consume all rows of a result before issuing a new statement on the same connection. (It works with MySQLdb because that driver buffers results by default.)
With Connector/Python you have to set the buffered argument to True for the cursor you use as an iterator. In the OP's question, this would be cursor1:
cursor1 = db.cursor(buffered=True)
cursor2 = db.cursor()
You can also pass buffered=True as a connection argument to make every cursor instantiated by that connection buffered.
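For example, a minimal sketch of the connection-level option (the connection details are the placeholders from the question):

import mysql.connector

# buffered=True on the connection makes every cursor it creates buffered,
# so iterating over cursor1 no longer blocks cursor2.execute()
db = mysql.connector.connect(user='alan', password='please', host='machine1',
                             database='mydb', buffered=True)
cursor1 = db.cursor()
cursor2 = db.cursor()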
def LiraRateApiCall():
    R = requests.get(url)
    timestamp = R.json()['buy'][-1][0]/1000
    format_date = '%d/%m/%y'
    date = datetime.fromtimestamp(timestamp)
    buyRate = R.json()['buy'][-1][1]
    print(date.strftime(format_date))
    print(buyRate)

    # ADDING TO SQL SERVER
    conn = odbc.connect("Driver={ODBC Driver 17 for SQL Server};"
                        'Server=LAPTOP-36NUUO53\SQLEXPRESS;'
                        'Database=test;'
                        'Trusted_connection=yes;')
    cursor = conn.cursor()
    cursor.execute('''
        INSERT INTO Data_table (Time1,Price)
        VALUES
            ('date',140),
            ('Date2' , 142)
    ''')
    conn.commit()
    cursor.execute('SELECT * FROM Data_table')
    for i in cursor:
        print(i)
How do I pass the variables date and buyRate to the table instead of hard-coding values like I did? (I put in 'date' and 140, for example, but I want to pass variables, not specific values.)
You'll need to check the driver version that you're using, but what you're looking for is the concept of bind variables. I'd suggest you look into the concept of fast_executemany as well - that should help speed things up. I've edited your code to show how bind variables typically work (using the (?, ?) SQL syntax), but there are other formats out there.
import requests
import pyodbc as odbc          # the question's code uses the alias odbc
from datetime import datetime

def LiraRateApiCall():
    R = requests.get(url)      # url is assumed to be defined elsewhere in the script
    timestamp = R.json()['buy'][-1][0]/1000
    format_date = '%d/%m/%y'
    date = datetime.fromtimestamp(timestamp)
    buyRate = R.json()['buy'][-1][1]
    print(date.strftime(format_date))
    print(buyRate)

    # ADDING TO SQL SERVER
    conn = odbc.connect("Driver={ODBC Driver 17 for SQL Server};"
                        'Server=LAPTOP-36NUUO53\SQLEXPRESS;'
                        'Database=test;'
                        'Trusted_connection=yes;')
    cursor = conn.cursor()

    # Set up the rows to insert as a list of (Time1, Price) tuples
    data = [('date', 140), ('Date2', 142)]

    # Use executemany since we have a list; each ? is bound positionally
    cursor.executemany('''
        INSERT INTO Data_table (Time1,Price)
        VALUES (?, ?)
    ''', data)
    conn.commit()

    cursor.execute('SELECT * FROM Data_table')
    for i in cursor:
        print(i)
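To insert the values computed at the top of the function instead of the hard-coded literals, the same placeholders can be bound to those variables. A minimal sketch, reusing the names defined in the function above (fast_executemany is an optional pyodbc cursor attribute):

# Bind the computed values instead of hard-coded literals
row = (date.strftime(format_date), buyRate)

cursor.fast_executemany = True   # optional pyodbc speed-up when inserting many rows
cursor.executemany('INSERT INTO Data_table (Time1, Price) VALUES (?, ?)', [row])
conn.commit()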
I don't entirely understand your question.
If you want to pass the variables:
insert_sql = "INSERT INTO Data_table (Time1,Price) VALUES ('" + str(date) + "'," + str(buyRate) + ")"
cursor.execute(insert_sql)
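A parameterized version of the same insert avoids the quoting and is generally safer (a sketch assuming the pyodbc cursor and the date/buyRate variables from the question):

insert_sql = 'INSERT INTO Data_table (Time1, Price) VALUES (?, ?)'
cursor.execute(insert_sql, (str(date), buyRate))
conn.commit()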
If you want to do a dynamic insert:
You can only insert by knowing the values or by inserting with a SELECT:
INSERT INTO table
SELECT * FROM tableAux
WHERE condition;
Alternatively, you could iterate through the fields of the table, extract them, and compare them to your variables to build a dynamic insert.
With this SELECT you can extract the columns:
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'table1'
I'm trying to store a MySQL query result in a pandas DataFrame using pymysql and am running into errors building the DataFrame. I found a similar question here and here, but it looks like there are pymysql-specific errors being thrown:
import pandas as pd
import datetime
import pymysql
# dummy values
connection = pymysql.connect(user='username', password='password', database='database_name', host='host')
start_date = datetime.datetime(2017,11,15)
end_date = datetime.datetime(2017,11,16)
try:
    with connection.cursor() as cursor:
        query = "SELECT * FROM orders WHERE date_time BETWEEN %s AND %s"
        cursor.execute(query, (start_date, end_date))
        df = pd.DataFrame(data=cursor.fetchall(), index=None, columns=cursor.keys())
finally:
    connection.close()
returns: AttributeError: 'Cursor' object has no attribute 'keys'
If I drop the index and columns arguments:
try:
    with connection.cursor() as cursor:
        query = "SELECT * FROM orders WHERE date_time BETWEEN %s AND %s"
        cursor.execute(query, (start_date, end_date))
        df = pd.DataFrame(cursor.fetchall())
finally:
    connection.close()
returns ValueError: DataFrame constructor not properly called!
Thanks in advance!
Use pandas.read_sql() for this (with pymysql the placeholder style is %s, as in your original query):
query = "SELECT * FROM orders WHERE date_time BETWEEN %s AND %s"
df = pd.read_sql(query, connection, params=(start_date, end_date))
Thank you for your suggestion to use pandas.read_sql(). It works for executing a stored procedure as well; I tested it in an MSSQL 2017 environment.
Below is an example (I hope it helps others):
def database_query_to_df(connection, stored_proc, start_date, end_date):
    # Define a query
    query = "SET NOCOUNT ON; EXEC " + stored_proc + " ?, ? " + "; SET NOCOUNT OFF"
    # Pass the parameters to the query, execute it, and store the results in a data frame
    df = pd.read_sql(query, connection, params=(start_date, end_date))
    return df
Try This:
import pandas as pd
import pymysql
mysql_connection = pymysql.connect(host='localhost', user='root', password='', db='test', charset='utf8')
sql = "SELECT * FROM `brands`"
df = pd.read_sql(sql, mysql_connection, index_col='brand_id')
print(df)
I have a part in my Python script where I need to insert some data into a table on a MySQL database; example below:
insert_data = "INSERT into test (test_date,test1,test2) values (%s,%s,%s)"
cur.execute(insert_data,(test_date,test1,test2))
db.commit()
db.close()
I have a couple of questions: what is incorrect with this syntax, and how is it possible to change the VALUES to a timestamp instead of %s for a string? Note that the column names in the database are the same as the names of the variables in my script.
Thanks
Try this:
import MySQLdb
import time
import datetime

ts = time.time()
timestamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')

conn = MySQLdb.connect(host="localhost",
                       user="root",
                       passwd="newpassword",
                       db="db1")
x = conn.cursor()

try:
    # test1 and test2 are assumed to hold the values from your script
    x.execute("""INSERT into test (test_date,test1,test2) values(%s,%s,%s)""", (timestamp, test1, test2))
    conn.commit()
except:
    conn.rollback()

conn.close()
Creating the timestamp can be done in one line; there is no need to use time.time():
from datetime import datetime
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Simply use the database NOW() function by putting it directly into the statement (it cannot be passed as a %s parameter, and the number of placeholders must match the number of values), e.g.
insert_data = "INSERT into test (test_date,test1,test2) values (NOW(),%s,%s)"
cur.execute(insert_data, (test1, test2))
db.commit()
db.close()
I'm using pyodbc to query a SQL Server database:
import datetime
import pyodbc
conn = pyodbc.connect("Driver={SQL Server};Server='dbserver',Database='db',
TrustedConnection=Yes")
cursor = conn.cursor()
ratings = ("PG-13", "PG", "G")
st_dt = datetime(2010, 1, 1)
end_dt = datetime(2010, 12, 31)
cursor.execute("""Select title, director, producer From movies
Where rating In ? And release_dt Between ? And ?""",
ratings, str(st_dt), str(end_dt))
but am receiving the error below. Does the tuple parameter need to be handled in a different way? Is there a better way to structure this query?
('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Line 9:
Incorrect syntax near '#P1'. (170) (SQLExecDirectW);
[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]
Statement(s) could not be prepared. (8180)")
UPDATE:
I was able to get this query to work using the string formatting operator, which isn't ideal as it introduces security concerns.
import datetime
import pyodbc
conn = pyodbc.connect("Driver={SQL Server};Server='dbserver',Database='db',
TrustedConnection=Yes")
cursor = conn.cursor()
ratings = ("PG-13", "PG", "G")
st_dt = datetime(2010, 1, 1)
end_dt = datetime(2010, 12, 31)
cursor.execute("""Select title, director, producer From movies
Where rating In %s And release_dt Between '%s' And '%s'""" %
(ratings, st_dt, end_dt))
To expand on Larry's second option (dynamically creating a parameterized string), I used the following successfully:
placeholders = ",".join("?" * len(code_list))
sql = "delete from dbo.Results where RESULT_ID = ? AND CODE IN (%s)" % placeholders
params = [result_id]
params.extend(code_list)
cursor.execute(sql, params)
Gives the following SQL with the appropriate parameters:
delete from dbo.Results where RESULT_ID = ? AND CODE IN (?,?,?)
You cannot parameterize multiple values in an IN () clause using a single string parameter. The only ways to accomplish that are:
1. String substitution (as you did).
2. Build a parameterized query in the form IN (?, ?, . . ., ?) and then pass in a separate parameter for each placeholder. I'm not an expert at Python to ODBC, but I imagine that this is particularly easy to do in a language like Python. This is safer because you get the full value of parameterization.
To expand on Larry and geographika's answers:
ratings = ('PG-13', 'PG', 'G')
st_dt = datetime(2010, 1, 1)
end_dt = datetime(2010, 12, 31)
placeholders = ', '.join('?' * len(ratings))
vars = (*ratings, st_dt, end_dt)
query = '''
select title, director, producer
from movies
where rating in (%s)
and release_dt between ? and ?
''' % placeholders
cursor.execute(query, vars)
With the placeholder, this will return a query of:
select title, director, producer
from movies
where rating in (?, ?, ?)
and release_dt between ? and ?
If you pass in ratings as-is, the driver will attempt to fit all of its items into one ?. However, if we pass in *ratings, each item in ratings takes its own place in the in() clause. Thus, we pass the tuple (*ratings, st_dt, end_dt) to cursor.execute().
The problem is your tuple. The ODBC connection expects a string to construct the query, and you are sending a Python tuple. Remember that you also have to get the string quoting correct. I'm assuming that the number of ratings you will be looking for varies. There is probably a better way, but my pyodbc code tends to be simple and straightforward.
Try the following:
import datetime
import pyodbc
conn = pyodbc.connect("Driver={SQL Server};Server='dbserver',Database='db',
TrustedConnection=Yes")
def List2SQLList(items):
    sqllist = "%s" % "\",\"".join(items)
    return sqllist
cursor = conn.cursor()
ratings = ("PG-13", "PG", "G")
st_dt = datetime(2010, 1, 1)
end_dt = datetime(2010, 12, 31)
cursor.execute("""Select title, director, producer From movies
Where rating In (?) And release_dt Between ? And ?""",
List2SQLList(ratings), str(st_dt), str(end_dt))
Using Python and MySQLdb, how can I check whether there are any records in a MySQL table (InnoDB)?
Just select a single row. If you get nothing back, it's empty! (Example from the MySQLdb site)
import MySQLdb
db = MySQLdb.connect(passwd="moonpie", db="thangs")
db.query("""SELECT * from mytable limit 1""")
results = db.store_result().fetch_row()  # db.query() itself returns nothing, so fetch the stored result
if not results:
    print "This table is empty!"
Something like
import MySQLdb
db = MySQLdb.connect("host", "user", "password", "dbname")
cursor = db.cursor()
sql = """SELECT count(*) as tot FROM simpletable"""
cursor.execute(sql)
data = cursor.fetchone()
db.close()
print data
will print the number of records in the simpletable table.
You can then test whether it is bigger than zero.
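For instance, a minimal check on the count fetched above (data is the one-row tuple returned by fetchone()):

if data[0] > 0:
    print "simpletable has records"
else:
    print "simpletable is empty"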