I want to fetch the latest id in flask mysql - python

When I run the following query, it gives me the first id. I want the last id, and the data in that row; how can I do it?
cursor = mysql.connection.cursor()
sorgu = "Select * From site2"
result = cursor.execute(sorgu)
if result > 0:
    articles = cursor.fetchone()
    return render_template("site.html", articles=articles)
else:
    return render_template("site.html")

First retrieve the max ID from the table (assuming your IDs increment upward as rows are added, so the largest ID belongs to the most recent row), then use it in your final query:
sorgu = "SELECT * FROM site2 WHERE ID = (SELECT MAX(ID) FROM site2)"
If you have a timestamp column in your database, you could use MAX(timestamp) to find the latest row as well.
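The whole pattern can be sketched end to end with the standard-library sqlite3 module (the question uses MySQL, but the MAX(ID) subquery works the same way there; table contents are invented for illustration):

```python
import sqlite3

# In-memory table standing in for `site2`; the `title` column is an assumption.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE site2 (ID INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT)")
conn.executemany("INSERT INTO site2 (title) VALUES (?)",
                 [("first",), ("second",), ("latest",)])

# Fetch the row carrying the largest (i.e. most recently assigned) ID.
cursor = conn.cursor()
cursor.execute("SELECT * FROM site2 WHERE ID = (SELECT MAX(ID) FROM site2)")
article = cursor.fetchone()
print(article)  # (3, 'latest')
```

In the Flask view, `article` would then be passed to render_template exactly as in the original code.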

how to run update sql query based on select query results using python

I am trying to update a row in an SQL table based on the results of a select query using Python. How can I extend this so that, if the select query finds results, the update query runs and prints "list updated", and otherwise it exits saying no result was found to update the table?
cursor = conn.cursor()
cursor.execute(
    "select * from student where email = 'xyz.com'"
)
student = cursor.fetchall()
print(student)
for row in student:
    cursor.execute("Update student set value = 0 where email = 'xyz.com'")
You don't need to do a separate SELECT, you can just use an OUTPUT clause to have your UPDATE statement return the value(s) from the row(s) that are updated. For example, with pyodbc:
sql = """\
UPDATE student SET value = 0
OUTPUT INSERTED.student_number
WHERE email = 'xyz.com' AND (value <> 0 OR value IS NULL)
"""
student_numbers = crsr.execute(sql).fetchall()
print(student_numbers) # [(1001, ), (1003, )]
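Note that OUTPUT is SQL Server-specific. A portable alternative, sketched here with the standard-library sqlite3 module (table and column names assumed from the question), is to run the UPDATE and branch on cursor.rowcount to get the "updated / no result found" behaviour the question asks for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (student_number INTEGER, email TEXT, value INTEGER)")
conn.execute("INSERT INTO student VALUES (1001, 'xyz.com', 5)")

cursor = conn.cursor()
cursor.execute("UPDATE student SET value = 0 WHERE email = ?", ("xyz.com",))
if cursor.rowcount > 0:
    print("list updated")    # at least one row matched and was updated
else:
    print("no result found to update table")
conn.commit()
```

This avoids the separate SELECT entirely, at the cost of not returning the updated rows themselves.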

What is wrong with this code that it is not able to increment the stock by one in my Database

I have written this code in python and using SQLite to interact with my database. I want the user to be able to return an item which is purchased, which works functionally, but now I need to actually increment the stock by 1 when an item is returned.
The code so far is like this:
itemToReturn = input("Enter the OrderItemsID of the item you want to return: ")
# sql statement to locate record based on input
itemToReturnSQL = '''UPDATE OrderItems
                     SET OrderItemStatus = 'Returned'
                     WHERE OrderItemsID = ?'''
locateItemSQL = c.execute("SELECT ProductID FROM OrderItems WHERE OrderItemsID = ?",
                          (itemToReturn,))
locateItem = c.fetchall()
returnStockSQL = '''UPDATE Products
                    SET ProductStock = ProductStock + 1
                    WHERE ProductID = ?'''
# executes the sql
conn.execute(itemToReturnSQL, (itemToReturn,))
conn.execute(returnStockSQL, (locateItem[0][0],))
# saves
conn.commit()
Orders are divided into 2 tables:
Order: OrderID (auto increment), UserID (foreign key), OrderDate
OrderItems: OrderItemsID (auto increment), OrderID (foreign key), ProductID (foreign key)
No errors are given now, but the stock still does not increment.
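One likely culprit (an assumption, since the full schema and input aren't shown) is that the SELECT finds no matching OrderItems row, so the Products update runs against a non-existent ProductID and silently changes nothing. A minimal self-contained sketch with invented sample data that guards the lookup and verifies the increment:

```python
import sqlite3

# Minimal stand-in schema; the sample rows are invented for illustration.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE OrderItems (OrderItemsID INTEGER PRIMARY KEY,"
          " ProductID INTEGER, OrderItemStatus TEXT)")
c.execute("CREATE TABLE Products (ProductID INTEGER PRIMARY KEY, ProductStock INTEGER)")
c.execute("INSERT INTO OrderItems VALUES (1, 10, 'Purchased')")
c.execute("INSERT INTO Products VALUES (10, 4)")

itemToReturn = "1"  # would come from input() in the real program

c.execute("SELECT ProductID FROM OrderItems WHERE OrderItemsID = ?", (itemToReturn,))
locateItem = c.fetchall()
if not locateItem:
    print("No order item found; nothing to return")  # guard instead of failing silently
else:
    c.execute("UPDATE OrderItems SET OrderItemStatus = 'Returned'"
              " WHERE OrderItemsID = ?", (itemToReturn,))
    c.execute("UPDATE Products SET ProductStock = ProductStock + 1"
              " WHERE ProductID = ?", (locateItem[0][0],))
    conn.commit()

stock = c.execute("SELECT ProductStock FROM Products WHERE ProductID = 10").fetchone()[0]
print(stock)  # 5
```

Printing cursor.rowcount after each UPDATE is another quick way to confirm whether the statements actually matched a row.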

Scan a single column in SQL Server for a data entry using python

This is what I have tried. I only want to search based on a single column in the table. Let's say the table name is Employees. The input parameter is entered by the user in the console.
exists = cursor.execute("SELECT TOP 1 * FROM Employees WHERE ID = ?", (str(input),))
print(exists)
if exists is None:
    return False
else:
    return True
I think this is what you are looking for:
select_query = '''SELECT TOP 1 * FROM Employees WHERE ID = ?;'''  # '?' is a placeholder
cursor.execute(select_query, (str(input),))
row = cursor.fetchone()  # None when no matching row exists
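The key point is that with pyodbc, cursor.execute returns the cursor itself (never None), so the original `if exists is None` check always fails; it is fetchone() that distinguishes a hit from a miss. The same pattern, demonstrated with the standard-library sqlite3 module (TOP 1 becomes LIMIT 1 there, and the sample row is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (ID TEXT, Name TEXT)")
conn.execute("INSERT INTO Employees VALUES ('E42', 'Ada')")

def employee_exists(cursor, employee_id):
    # fetchone() returns None when no row matches, a tuple otherwise
    cursor.execute("SELECT * FROM Employees WHERE ID = ? LIMIT 1", (str(employee_id),))
    return cursor.fetchone() is not None

cursor = conn.cursor()
print(employee_exists(cursor, "E42"))  # True
print(employee_exists(cursor, "E99"))  # False
```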

Task solution for best performance

What's the best / fastest solution for the following task:
Used technology: MySQL database + Python
I'm downloading a data.sql file. Its format is:
INSERT INTO `temp_table` VALUES (group_id,city_id,zip_code,post_code,earnings,'group_name',votes,'city_name',person_id,'person_name',networth);
INSERT INTO `temp_table` VALUES (group_id,city_id,zip_code,post_code,earnings,'group_name',votes,'city_name',person_id,'person_name',networth);
.
.
Values in each row differ.
Tables structures: http://sqlfiddle.com/#!9/8f10d6
A person can have multiple cities
A person can be only in one group or can be without group.
A group can have multiple persons
And I know from which country these .sql data come.
I need to split this data into 3 tables, updating rows that already exist in the tables and inserting new rows otherwise.
So I came up with 2 solutions:
1. Split the values from the file in Python, then for each line perform 3 selects plus 3 updates/inserts inside a transaction.
2. Bulk insert the data into a temporary table, then manipulate the data inside the database: for each row in the temporary table, perform 3 select queries (one against each actual table); if a matching row is found, run an update, otherwise run an insert.
I will be running this function multiple times per day with over 10K lines in the .sql file and it will be updating / creating over 30K rows in the database.
//EDIT
My inserting / updating code now:
autocommit = "SET autocommit=0"
with connection.cursor() as cursor:
    cursor.execute(autocommit)

data = data.sql  # contents of the downloaded .sql file
lines = data.splitlines()
for line in lines:
    with connection.cursor() as cursor:
        cursor.execute(line)

temp_data = "SELECT * FROM temp_table"
with connection.cursor() as cursor:
    cursor.execute(temp_data)
    temp_data = cursor.fetchall()

for temp_row in temp_data:
    group_id = temp_row[0]
    city_id = temp_row[1]
    zip_code = temp_row[2]
    post_code = temp_row[3]
    earnings = temp_row[4]
    group_name = temp_row[5]
    votes = temp_row[6]
    city_name = temp_row[7]
    person_id = temp_row[8]
    person_name = temp_row[9]
    networth = temp_row[10]

    group_select = "SELECT * FROM perm_group WHERE group_id = %s AND countryid_fk = %s"
    group_values = (group_id, countryid)
    with connection.cursor() as cursor:
        row = cursor.execute(group_select, group_values)

    if row == 0 and group_id != 0:  # if the person doesn't have a group, do not create one
        group_insert = "INSERT INTO perm_group (group_id, group_name, countryid_fk) VALUES (%s, %s, %s)"
        group_insert_values = (group_id, group_name, countryid)
        with connection.cursor() as cursor:
            cursor.execute(group_insert, group_insert_values)
            groupid = cursor.lastrowid
    elif row == 1 and group_id != 0:
        group_update = "UPDATE perm_group SET group_name = %s WHERE group_id = %s AND countryid_fk = %s"
        group_update_values = (group_name, group_id, countryid)
        with connection.cursor() as cursor:
            cursor.execute(group_update, group_update_values)
        # Select the group id for the current row to assign the correct group to the person
        group_certain_select = "SELECT id FROM perm_group WHERE group_id = %s AND countryid_fk = %s"
        group_certain_select_values = (group_id, countryid)
        with connection.cursor() as cursor:
            cursor.execute(group_certain_select, group_certain_select_values)
            groupid = cursor.fetchone()
    # .
    # .
    # .
    # Repeating the same piece of code for person and city
Measured time: 206 seconds, which is not acceptable.
group_insert = "INSERT INTO perm_group (group_id, group_name, countryid_fk) VALUES (%s, %s, %s) ON DUPLICATE KEY UPDATE group_id = %s, group_name = %s"
group_insert_values = (group_id, group_name, countryid, group_id, group_name)
with connection.cursor() as cursor:
    cursor.execute(group_insert, group_insert_values)

# Select the group id for the current row to assign the correct group to the person
group_certain_select = "SELECT id FROM perm_group WHERE group_id = %s AND countryid_fk = %s"
group_certain_select_values = (group_id, countryid)
with connection.cursor() as cursor:
    cursor.execute(group_certain_select, group_certain_select_values)
    groupid = cursor.fetchone()
Measured time: from 30 to 50 seconds. (Still quite long, but it's getting better)
Are there any other better (faster) options on how to do it?
Thanks in advance, popcorn
I would recommend that you load the data into a staging table and do the processing in SQL.
Basically, your ultimate result is a set of SQL tables, so SQL is necessarily going to be part of the solution. You might as well put as much logic into the database as you can, to reduce the number of tools needed.
Loading 10,000 rows should not take much time. However, if you have a choice of data formats, I would recommend a CSV file over INSERT statements; INSERTs incur extra overhead, if only because they are larger.
Once the data is in the database, I would not worry much about the processing time for storing the data in three tables.
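The staging-table approach can be sketched with the standard-library sqlite3 module (a hedged illustration: MySQL would use executemany plus INSERT ... ON DUPLICATE KEY UPDATE, while this demo uses SQLite's ON CONFLICT upsert, available since SQLite 3.24; column names come from the question, sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE temp_table (group_id INTEGER, group_name TEXT);
    CREATE TABLE perm_group (id INTEGER PRIMARY KEY AUTOINCREMENT,
                             group_id INTEGER UNIQUE, group_name TEXT);
    INSERT INTO perm_group (group_id, group_name) VALUES (1, 'old name');
""")

# 1) Bulk-load the staging table in one batched call instead of per-line executes.
rows = [(1, "new name"), (2, "fresh group")]
conn.executemany("INSERT INTO temp_table VALUES (?, ?)", rows)

# 2) One set-based upsert replaces the per-row select / insert / update loop.
#    (group_id = 0 rows are skipped, matching the question's "no group" rule.)
conn.execute("""
    INSERT INTO perm_group (group_id, group_name)
    SELECT group_id, group_name FROM temp_table WHERE group_id != 0
    ON CONFLICT(group_id) DO UPDATE SET group_name = excluded.group_name
""")
conn.commit()
print(conn.execute("SELECT group_id, group_name FROM perm_group ORDER BY group_id").fetchall())
# [(1, 'new name'), (2, 'fresh group')]
```

The same two-statement shape, repeated for the person and city tables, replaces tens of thousands of round trips with a handful of set-based statements.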

Delete all duplicated rows except max value from python

I have a MySQL database that I use from my Discord Python bot with AIOMySQL, but I found that, due to a bug, the bot created duplicated rows with the same ID but updated values, and that last updated row is the one I want to keep. One example of my duplicated rows:
So now I want to delete all the duplicated rows, except the one with max XP.
I did a backup first, then tried to save all the IDs in a list, skipping any that were already in it, and then for every ID delete all rows except the one with the max value, as in this code:
await cur.execute("SELECT ID FROM USUARIOS;")
r = await cur.fetchall()
uslist = []
for a in r:
    for b in a:
        if b not in uslist:
            uslist.append(b)
for user in uslist:
    await cur.execute("SELECT * FROM USUARIOS WHERE ID = {} ORDER BY XP LIMIT 1;".format(user))
    r = await cur.fetchone()
    uid = r[0]
    print(uid)
    xp = r[1]
    await cur.execute("DELETE FROM USUARIOS WHERE ID = {} and xp != {};".format(uid, xp))
await conn.commit()
But when I checked the DB, some rows were completely deleted, including the ones with the max values.
Assuming you want to do this in MySQL:
SELECT * FROM table WHERE XP <> (SELECT MAX(XP) FROM table) GROUP BY ID, XP, GC
UNION
SELECT * FROM table WHERE XP = (SELECT MAX(XP) FROM table)
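A delete-based sketch that keeps only the max-XP row per ID. Note that the original loop's ORDER BY XP LIMIT 1 fetches the minimum XP (ascending order is the default), which is likely why the max rows were deleted. Demonstrated with the standard-library sqlite3 module and invented sample data; in MySQL the self-referencing delete would need the multi-table form (DELETE u1 FROM USUARIOS u1 JOIN USUARIOS u2 ...), since MySQL rejects a subquery on the table being deleted from:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE USUARIOS (ID INTEGER, XP INTEGER)")
conn.executemany("INSERT INTO USUARIOS VALUES (?, ?)",
                 [(1, 10), (1, 50), (1, 30), (2, 7), (2, 9)])

# Delete every row for which a higher-XP duplicate with the same ID exists,
# leaving exactly the max-XP row per ID.
conn.execute("""
    DELETE FROM USUARIOS
    WHERE EXISTS (SELECT 1 FROM USUARIOS AS u2
                  WHERE u2.ID = USUARIOS.ID AND u2.XP > USUARIOS.XP)
""")
conn.commit()
print(conn.execute("SELECT * FROM USUARIOS ORDER BY ID").fetchall())
# [(1, 50), (2, 9)]
```

A single set-based DELETE like this also avoids the per-ID round trips and the string-formatted SQL of the original loop.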
