Passing arguments to MySQL using Python

I am trying to write a program in Python that accepts user input and queries a MySQL database.
My database has the following tables:
Departments(dept_no (primary key), dept_name)
Employees(emp_no (primary key), f_name, l_name, dob, hire_date)
Salaries(emp_no (primary key), salary, from_date (primary key), to_date)
When I give the input "Display the employees with salary greater than 20000", the program should perform the following action:
from MySQLdb import connect, Error

def mysql_code():
    try:
        with connect(host="localhost", user="root", password="root", database="employees") as connection:
            with connection.cursor() as cursor:
                cursor.execute("SELECT e.emp_no, e.first_name, e.last_name, s.salary FROM employees e INNER JOIN "
                               "salaries s ON e.emp_no = s.emp_no WHERE s.salary > 20000")
                records = cursor.fetchall()
                print(records)
                print("Total results found = ", cursor.rowcount)
    except Error as e:
        print(e)
and display the results.
Is it possible to do so or do I have to write code for each possible query?
I previously used:
cursor.execute("SELECT * FROM {} WHERE {} like %s".format(table, column), (text,))
when I defined the queries myself and gave the user options to choose from, writing a query for each possible user input (display all records, search records by first name, and so on). When the user chose an option, the result was displayed.
Now I want the user to give inputs such as "Display employees with salaries greater than 20000 working in dept_no d002" or similar queries.
The program should accept the query as a string from the user.
The code should join the tables and display the result, joining emp_no, first_name, last_name, salary and dept_no from the tables employees, salaries and departments respectively.

You have an error in your code: the LIKE after the ON comparison is wrong; use = instead.

from MySQLdb import connect, Error

def mysql_code():
    try:
        with connect(host="localhost", user="root", password="root", database="employees") as connection:
            with connection.cursor() as cursor:
                cursor.execute("SELECT e.emp_no, e.first_name, e.last_name, s.salary FROM employees e INNER JOIN "
                               "salaries s ON e.emp_no = s.emp_no WHERE s.salary > 20000")
                records = cursor.fetchall()
                print(records)
                print("Total results found = ", cursor.rowcount)
    except Error as e:
        print(e)

If I'm understanding your question correctly, you want to perform customizable MySQL queries in Python by writing one generic function and passing it the pieces of the query.
For this I would use f-strings. With them you build the query string:

cursor.execute(f"SELECT {values_to_select} "
               f"FROM {table_name} "
               f"WHERE {filters} "
               f"{extra_clauses}")

This way you can pass the values to the function and it will perform the query with the values specified.
Note: as a test this is fine, but if this code is going to be used in production, giving the end user control over the query text is a huge security risk; check out SQL injection to see how these kinds of attacks happen. I would recommend sanitizing (or, better, whitelisting) the given strings first so users can't craft arbitrary SQL.
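A minimal sketch of that whitelisting idea, assuming the tables and columns from the question (the helper name run_filtered_query and the allowed-operator set are illustrative, not from the original code): identifiers are validated against a whitelist before being interpolated, while the value itself is passed as a parameter so the driver escapes it.

ALLOWED_TABLES = {"employees", "salaries", "departments"}
ALLOWED_COLUMNS = {"emp_no", "first_name", "last_name", "salary", "dept_no"}
ALLOWED_OPERATORS = {">", "<", ">=", "<=", "="}

def run_filtered_query(cursor, table, column, operator, value):
    # Identifiers cannot be bound as parameters, so validate them first.
    if table not in ALLOWED_TABLES or column not in ALLOWED_COLUMNS:
        raise ValueError("table or column not allowed")
    if operator not in ALLOWED_OPERATORS:
        raise ValueError("operator not allowed")
    # The value is passed as a parameter (%s), never formatted into the string.
    cursor.execute(f"SELECT * FROM {table} WHERE {column} {operator} %s", (value,))
    return cursor.fetchall()

# e.g. run_filtered_query(cursor, "salaries", "salary", ">", 20000)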

sqlite3 parameters of unsupported type

I am trying to make a simple web budget app with a SQL database but I keep getting ValueError: parameters are of an unsupported type
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def index():
    return (render_template('index.html'))

@app.route("/start", methods=['GET', 'POST'])
def start():
    import sqlite3
    import pandas as pd
    if request.method == 'POST':
        connection = sqlite3.connect('transactions.db')
        cursor = connection.cursor()
        amount = request.form.get('amount')
        description = request.form.get('description')
        category = request.form.get('category')
        print(amount, description, category)
        cursor.execute(f"""INSERT INTO transactions (amount, description, category)
                           VALUES ({amount}, '{description}', '{category}')""", connection)
        cursor.close()
        connection.commit()
        table = pd.read_sql("""SELECT * FROM transactions """, connection)
        return (render_template('start.html'), value1 == table)
    else:
        return (render_template('start.html'))
Here is the SQL database:

-- SQLite
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    amount REAL,
    description TEXT,
    category TEXT
);
The inputs I used were
10.50, 'Dog Food', and 'Misc.'
Quick solution: do not pass connection, and use SQL placeholder binding (with ?) rather than string formatting.

cursor.execute("""
    INSERT INTO transactions (amount, description, category)
    VALUES (?, ?, ?)
""", (amount, description, category))
Longer explanation, with your original code:

cursor.execute(f"""INSERT INTO transactions (amount, description, category)
                   VALUES ({amount}, '{description}', '{category}')""", connection)

Your original code passes connection as the second argument to cursor.execute, and connection is a sqlite3 Connection instance, which is unsupported: the only supported second argument for cursor.execute is a sequence or dict used for parameter substitution in your query.
Also see the actual definition of the argument, and why my solution code uses ? instead of doing string substitution:
https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute
execute(sql, parameters=(), /)
Execute SQL statement sql. Bind values to the statement
using placeholders that map to the sequence or dict parameters.
https://docs.python.org/3/library/sqlite3.html#sqlite3-placeholders
SQL operations usually need to use values from Python variables. However, beware of using Python's string operations to assemble queries, as they are vulnerable to SQL injection attacks (see the xkcd webcomic for a humorous example of what can go wrong).
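Putting it together, a minimal sketch of the corrected POST branch from the question (same variable names as the question; committing before reading the table back is one reasonable ordering, not the only one):

connection = sqlite3.connect('transactions.db')
cursor = connection.cursor()
cursor.execute(
    "INSERT INTO transactions (amount, description, category) VALUES (?, ?, ?)",
    (amount, description, category),  # values bound by sqlite3, not formatted in
)
connection.commit()  # commit the insert before reading the table back
table = pd.read_sql("SELECT * FROM transactions", connection)
connection.close()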

Why am I getting "mysql.connector.errors.InternalError: Unread result found" when I submit my select options form in index.html [duplicate]

I am inserting JSON data into a MySQL database. I am parsing the JSON and then inserting it into the MySQL db using the Python connector. Through trial, I can see the error is associated with this piece of code:
for steps in result['routes'][0]['legs'][0]['steps']:
    query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
    if steps['travel_mode'] == "pub_tran":
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_1']['dep']['lat']
        Orig_lng = steps['var_1']['dep']['lng']
        Dest_lat = steps['var_1']['arr']['lat']
        Dest_lng = steps['var_1']['arr']['lng']
        time_stamp = leg['_sent_time_stamp']
    if steps['travel_mode'] == "a_pied":
        query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_2']['lat']
        Orig_lng = steps['var_2']['lng']
        Dest_lat = steps['var_2']['lat']
        Dest_lng = steps['var_2']['lng']
        time_stamp = leg['_sent_time_stamp']
    cursor.execute(query, (travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp))
    leg_no = cursor.fetchone()[0]
    print(leg_no)
I have already inserted the higher-level details and am now searching the database to associate this lower-level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates together with the time_stamp. I believe the logic is sound, and by printing leg_no immediately after this section I can see values which, at first inspection, appear to be correct.
However, when added to the rest of the code, it causes subsequent sections where more data is inserted using the cursor to fail with this error -
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
The issue seems similar to MySQL Unread Result with Python
Is the query too complex and needs splitting or is there another issue?
If the query is indeed too complex, can anyone advise how best to split this?
EDIT: As per @Gord's help, I've tried to dump any unread results:
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng))
leg_no = cursor.fetchone()[0]
try:
    cursor.fetchall()
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        pass
    else:
        raise
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp))
But, I still get
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
[Finished in 3.3s with exit code 1]
scratches head
EDIT 2 - when I print the ie.msg, I get -
No result set to fetch from
All that was required was for buffered to be set to true!
cursor = cnx.cursor(buffered=True)
The reason is that without a buffered cursor the results are loaded "lazily": fetchone() actually fetches only one row from the full result set of the query. When you use the same cursor again, the connector complains that you still have n-1 results waiting to be fetched (where n is the size of the result set). With a buffered cursor, however, the connector fetches ALL rows behind the scenes and you just take one from that buffer, so the MySQL server won't complain.
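A small sketch of that in practice, reusing the leg_data query from the question (the second table, some_other_table, is just a placeholder for illustration):

import mysql.connector

cnx = mysql.connector.connect(host="localhost", user="root",
                              password="...", database="mydb")

# buffered=True: the connector pulls the whole result set into client memory,
# so taking only one row and then reusing the cursor is fine.
cursor = cnx.cursor(buffered=True)
cursor.execute("SELECT leg_no FROM leg_data WHERE travel_mode = %s", ("pub_tran",))
leg_no = cursor.fetchone()[0]  # remaining rows stay in the client-side buffer
cursor.execute("INSERT INTO some_other_table (leg_no) VALUES (%s)", (leg_no,))
cnx.commit()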
I was able to recreate your issue. MySQL Connector/Python apparently doesn't like it if you retrieve multiple rows and don't fetch them all before closing the cursor or using it to retrieve some other stuff. For example
import mysql.connector

cnxn = mysql.connector.connect(
    host='127.0.0.1',
    user='root',
    password='whatever',
    database='mydb')
crsr = cnxn.cursor()
crsr.execute("DROP TABLE IF EXISTS pytest")
crsr.execute("""
    CREATE TABLE pytest (
        id INT(11) NOT NULL AUTO_INCREMENT,
        firstname VARCHAR(20),
        PRIMARY KEY (id)
    )
""")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Gord')")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Anne')")
cnxn.commit()
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest")  # InternalError: Unread result found.
If you only expect (or care about) one row then you can put a LIMIT on your query
crsr.execute("SELECT firstname FROM pytest LIMIT 0, 1")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest") # OK now
or you can use fetchall() to get rid of any unread results after you have finished working with the rows you retrieved.
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
try:
crsr.fetchall() # fetch (and discard) remaining rows
except mysql.connector.errors.InterfaceError as ie:
if ie.msg == 'No result set to fetch from.':
# no problem, we were just at the end of the result set
pass
else:
raise
crsr.execute("SELECT firstname FROM pytest") # OK now
cursor.reset() is really what you want.
fetchall() is not good because you may end up moving unnecessary data from the database to your client.
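Continuing the pytest example above, a sketch of the reset() approach (assuming MySQL Connector/Python's cursor exposes reset(), as this answer states):

crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]  # take only the first row
crsr.reset()                # discard the unread remainder without transferring it
crsr.execute("SELECT firstname FROM pytest")  # no "Unread result found" now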
The problem is about buffering: results from a previous statement are still pending on the connection, so it cannot perform the next statement. There are two ways to give the cursor a buffer. First, enable it only for a particular cursor:

import mysql.connector

cnx = mysql.connector.connect()
# Only this particular cursor will buffer results
cursor = cnx.cursor(buffered=True)

Alternatively, you can enable buffering for every cursor created from a connection:

import mysql.connector

# All cursors created from cnx2 will be buffered by default
cnx2 = mysql.connector.connect(buffered=True)
cursor = cnx2.cursor()

If the unread results are spread across several cursors on the connection, the latter works for you. Enjoy coding.
If you want to get only one result from a request, and want to reuse the same connection for other requests afterwards, limit your SQL SELECT to one row by adding "LIMIT 1" at the end of the request,
e.g. "SELECT field FROM table WHERE x=1 LIMIT 1;"
This is also faster than using "buffered=True".
Set the consume_results argument on the connect() method to True.

cnx = mysql.connector.connect(
    host="localhost",
    user="user",
    password="password",
    database="database",
    consume_results=True
)

Now, instead of throwing an exception, it basically does fetchall() for you.
Unfortunately this still makes it slow if you have a lot of unread rows.
There is also a possibility that your connection to MySQL Workbench is disconnected. Establish the connection again. This solved the problem for me.
cursor.reset()
and then create tables and load entries
Would setting the cursor within the for loop, executing it, and then closing it again in the loop help? Like:

for steps in result['routes'][0]['legs'][0]['steps']:
    cursor = cnx.cursor()
    ....
    leg_no = cursor.fetchone()[0]
    cursor.close()
    print(leg_no)

Can python cursor.execute accept multiple queries in one go?

Can the cursor.execute call below execute multiple SQL queries in one go?
cursor.execute("use testdb;CREATE USER MyLogin")
I don't have Python set up yet, but I want to know if the above form is supported by cursor.execute.
import pyodbc

# Some other example server values are
# server = 'localhost\sqlexpress' # for a named instance
# server = 'myserver,port' # to specify an alternate port
server = 'tcp:myserver.database.windows.net'
database = 'mydb'
username = 'myusername'
password = 'mypassword'
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password)
cursor = cnxn.cursor()

# Sample select query
cursor.execute("SELECT @@version;")
row = cursor.fetchone()
while row:
    print(row[0])
    row = cursor.fetchone()
Multiple SQL statements in a single string is often referred to as an "anonymous code block".
There is nothing in pyodbc (or pypyodbc) to prevent you from passing a string containing an anonymous code block to the Cursor.execute() method. They simply pass the string to the ODBC Driver Manager (DM) which in turn passes it to the ODBC Driver.
However, not all ODBC drivers accept anonymous code blocks by default. Some databases default to allowing only a single SQL statement per .execute() to protect us from SQL injection issues.
For example, MySQL/Connector ODBC defaults MULTI_STATEMENTS to 0 (off) so if you want to run an anonymous code block you will have to include MULTI_STATEMENTS=1 in your connection string.
Note also that changing the current database by including a USE … statement in an anonymous code block can sometimes cause problems because the database context changes in the middle of a transaction. It is often better to execute a USE … statement by itself and then continue executing other SQL statements.
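For example, a DSN-less pyodbc connection string along these lines would let MySQL Connector/ODBC run an anonymous code block (driver name, credentials and the two-statement block are placeholders for illustration), while the USE statement is still executed on its own:

import pyodbc

# MULTI_STATEMENTS=1 tells MySQL Connector/ODBC to allow anonymous code blocks.
cnxn = pyodbc.connect(
    "DRIVER={MySQL ODBC 8.0 ANSI Driver};"
    "SERVER=localhost;UID=root;PWD=whatever;"
    "DATABASE=testdb;MULTI_STATEMENTS=1;"
)
crsr = cnxn.cursor()
crsr.execute("USE testdb")  # run the USE by itself
crsr.execute("CREATE TABLE t1 (id INT); INSERT INTO t1 (id) VALUES (1);")  # anonymous code block
cnxn.commit()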
Yes, it is possible with MySQL Connector/Python by passing multi=True:

operation = 'SELECT 1; INSERT INTO t1 VALUES (); SELECT 2'
for result in cursor.execute(operation, multi=True):
    if result.with_rows:
        print(result.fetchall())

But it is not a comprehensive solution. For example, in a batch with two SELECTs you run into problems: both result sets must be fully fetched from the cursor. So the best solution is to break the query into sub-queries and do your work step by step, for example:

s = "USE some_db; SELECT * FROM some_table;"
statements = filter(None, s.split(';'))
for statement in statements:
    cur.execute(statement.strip() + ';')
The pyodbc documentation should give you the example you're looking for; see also the GitHub wiki: https://github.com/mkleehammer/pyodbc/wiki/Objects#cursors
You can see an example here:

cnxn = pyodbc.connect(...)
cursor = cnxn.cursor()
cursor.execute("""
    select user_id, last_logon
    from users
    where last_logon > ?
      and user_type <> 'admin'
    """, twoweeks)
rows = cursor.fetchall()
for row in rows:
    print('user %s logged on at %s' % (row.user_id, row.last_logon))

From this example and from exploring the code, I would say your next step is testing a multi-statement cursor.execute("<your_sql_query>").
If that test works, maybe try creating a class and then creating an instance of that class for each query you want to run.
This would be the basic evolution of a developer's effort of reproducing documentation... hope this helps you :)
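A rough sketch of that class idea (the QueryRunner name and the connection string are hypothetical, just to illustrate the suggestion):

import pyodbc

class QueryRunner:
    """Hypothetical helper: one instance per parameterized query."""

    def __init__(self, connection_string, sql):
        self.connection_string = connection_string
        self.sql = sql

    def run(self, *params):
        # Open a connection, run the stored query with its parameters, return all rows.
        cnxn = pyodbc.connect(self.connection_string)
        try:
            cursor = cnxn.cursor()
            cursor.execute(self.sql, *params)
            return cursor.fetchall()
        finally:
            cnxn.close()

# Usage (connection string and query are placeholders):
# recent_logons = QueryRunner(conn_str, "select user_id from users where last_logon > ?")
# rows = recent_logons.run(twoweeks)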
Yes, you can get results for multiple queries by using the nextset() method...
query = "select * from Table1; select * from Table2"
cursor = connection.cursor()
cursor.execute(query)
table1 = cursor.fetchall()
cursor.nextset()
table2 = cursor.fetchall()
The code explains it... cursors return result "sets", which you can move between using the nextset() method.

Python: How to access MySQL DB table using SQLAlchemy

I am making a Python GUI that will look up the status of a helpdesk ticket in a MySQL database. I connected Python to an existing MySQL database with SQLAlchemy using the code below.
conn = mysql.connector.connect(user='root',
                               password='stuff', host='127.0.0.1',
                               database='mydb')
c = conn.cursor()
I only need access to one of the columns, ticket_id, in a table called tickets. Basically I want to do this:
SELECT ticket_status FROM tickets WHERE ticket_id = 123;
What would be the simplest way to do this?
The following code should work for fetching a single value. If you realize later that you need to fetch more than one row, you can change fetchone() to fetchall().

try:
    sql = '''
        SELECT ticket_status FROM tickets WHERE ticket_id = 123
    '''
    c.execute(sql)
    result = c.fetchone()
except Exception as e:
    raise Exception(e)
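If the ticket number comes from user input (e.g. the GUI), a parameterized version of the same lookup is safer. This sketch assumes the mysql.connector cursor c from the question:

ticket_id = 123  # would come from the GUI in practice
c.execute("SELECT ticket_status FROM tickets WHERE ticket_id = %s", (ticket_id,))
result = c.fetchone()
print(result[0] if result else "ticket not found")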

SQL Select where not working

I am using Python/Flask and trying to query my DB.
conn = sqlite3.connect('./flaskdb.db')
cur = conn.cursor()
cur.execute('SELECT email FROM users WHERE email=\'%s\'', "name")
I have 2 columns, email and password, and the values name, password as one of the rows/entries.
Why isn't this working? I get the error:
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 0, and there are 7 supplied.
I think you are getting bogged down with prepared statements here. Try this code:

conn = sqlite3.connect('./flaskdb.db')
cur = conn.cursor()
name = 'someone@somewhere.com'
cur.execute('SELECT email FROM users WHERE email=?', (name,))

Corrections include using ? as a placeholder instead of %s, which is what might be used with other databases. Also, if you want to bind a variable called name, it should not have quotes around it.
I have a solution:

cur.execute('SELECT password FROM users WHERE email=(?)', (email,))

You need to pass the value as a tuple and use (?) as the placeholder.
