Python SQLite Data inserted into table disappearing? [duplicate] - python

This question already has answers here:
Sqlite insert query not working with python?
(2 answers)
Closed 4 years ago.
So this is confusing the hell out of me, and the title doesn't really explain it properly. I have a webpage which, when a user selects from a dropdown list, makes an AJAX call to a URL handled by Flask. It first issues a call to /teamSelected, where my Python code scrapes some information about the football match between those two teams, puts it in an accordingly named table and returns.
After this, a second AJAX call is made to /requestCommentary, where in Python I then retrieve this data from the table and return it.
Problem:
When I first make the call to /teamSelected, I have code which drops the table if it exists. After that I check whether the table exists (this seems strange, but dropping the table every time it's called is just me making sure the if branch of the program is entered so I can test that everything in there works). If the table doesn't exist, it enters the if branch, where it creates the table, scrapes the data and stores it in the table. If I then try to print the contents of the table, it spits them out perfectly, despite the count query telling me there is only 1 row?
However, if I remove the code at the start which drops the table and then try to print the contents, it prints nothing.
To me this makes very little sense. If the table doesn't exist, I make it, fill it with data and I can access it. However, if I comment out the drop table statement, on the next call the table exists and isn't dropped, so I should be able to access it, but there's no longer any data in there. If I were to guess, it's almost as if the table is made only for that call and destroyed afterwards, so subsequent calls can't access it?
/teamSelected - Scrapes data, adds to database
@app.route("/teamSelected", methods=['POST', 'GET'])
def new():
    try:
        connection = sqlite3.connect("commentary.db")
        cursor = connection.cursor()
    except:
        print("COULD NOT CONNECT TO DATABASE")
    data = request.get_data()  # gets data passed via AJAX call
    splitData = data.decode().replace("\"", "").split("__")  # data contains different elements split up by "__"
    homeTeam = splitData[0]
    awayTeam = splitData[1]
    tableName = homeTeam + awayTeam + splitData[2]  # unique table name
    cursor.execute("DROP TABLE if exists " + tableName)  # drops table to ensure the if statement is entered
    cursor.execute("SELECT count(*) FROM sqlite_master WHERE type='table' AND name='" + tableName + "';")
    result = cursor.fetchone()
    number_of_rows = result[0]
    print("R O W S " + str(number_of_rows))  # always prints 0
    if number_of_rows == 0:
        create_table_string = "create table if not exists '" + tableName + "' (id INTEGER PRIMARY KEY, commentary TEXT, time TEXT)"
        cursor.execute(create_table_string)

        def scrapeInfo():
            ...
            # scraping stuff
            ...
            maxUpdate = 5
            updateNumber = 0
            while updateNumber < maxUpdate:
                # inserts scraped data into table
                cursor.execute("INSERT INTO " + tableName + "(commentary, time) VALUES(?,?)",
                               (commentaryUpdates[updateNumber], times[updateNumber]))
                updateNumber += 1
            cursor.execute("select * from " + tableName)
            rows = cursor.fetchall()
            for row in rows:
                print(row)
            # THIS ^ works
            cursor.execute("SELECT count(*) FROM sqlite_master WHERE type='table' AND name='" + tableName + "';")
            result = cursor.fetchone()
            number_of_rows = result[0]
            print(number_of_rows)
            # This ^ prints 1 despite the above for loop printing 5 rows
            return jsonify("CLEAN")

        return scrapeInfo()

    # This is only reached if the table already exists, i.e. only when the
    # drop table statement above is commented out. Here, this prints nothing,
    # no idea why.
    cursor.execute("select * from " + tableName)
    rows = cursor.fetchall()
    for row in rows:
        print(row)
/RequestCommentary - retrieves data from table
@app.route("/requestCommentary", methods=['POST', 'GET'])
def getCommentary():
    data = request.get_data()
    splitData = data.decode().replace("\"", "").split("__")
    homeTeam = splitData[0]
    awayTeam = splitData[1]
    tableName = homeTeam + awayTeam + splitData[2]
    # Here I'm trying to retrieve the data from the table, but nothing is printed
    cursor.execute("select * from " + tableName)
    rows = cursor.fetchall()
    for row in rows:
        print(row)
    return jsonify("CLEAN")
To recap, the unexpected behaviour:
Data is only retrieved from the table if it is first dropped and then has data added
If the table already exists from a previous call where data was added, the new call retrieves no data
number_of_rows after insertion of the data prints 1 (could be relevant, as it should be printing 5)
The separate route /requestCommentary cannot access the table regardless
No exceptions are being thrown
I could really use a hand on this, as I am completely stumped on what the issue is here and have been at this for hours.
After some more testing, I'm certain it's something to do with the scope of the tables being created. I'm not sure how or why, but I can only seem to access data I add to the table within that same call; any data added by previous calls is non-existent, which makes me think the table's data is somehow local to the call accessing it rather than global?

Just managed to figure it out. I knew it was something to do with local changes not being seen globally. After looking around I realised this is exactly the problem I'd have if I wasn't calling connection.commit() to save the changes being made. I've added it now, and the changes can be seen by all calls and everything is working properly.
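A minimal sketch of that fix, using a throwaway database file and a made-up table name (two separate connections stand in for the two Flask requests):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "commentary.db")

# First "request": create the table, insert a row, and crucially commit.
conn1 = sqlite3.connect(path)
conn1.execute("CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY, commentary TEXT)")
conn1.execute("INSERT INTO demo (commentary) VALUES (?)", ("kick off",))
conn1.commit()  # without this line, the insert is rolled back when the connection closes
conn1.close()

# Second "request": a fresh connection, like the one /requestCommentary opens.
conn2 = sqlite3.connect(path)
rows = conn2.execute("SELECT commentary FROM demo").fetchall()
print(rows)  # [('kick off',)]
conn2.close()
```

Without the commit() call, the second connection sees an empty table, which is exactly the symptom described above.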

Related

How to escape a #/# (for example 6/8) in the name of a table from a database

I am currently trying to get a list of values from a table inside an SQL database. The problem is appending the values, due to the table's name, which I can't change. The table's name is something like Value123/123.
I tried making a variable with the name like
x = 'Value123/123'
then doing
row.append(x)
but that just prints Value123/123 and not the values from the database
cursor = conn.cursor()
cursor.execute("select Test, Value123/123 from db")
Test = []
Value = []
Compiled_Dict = {}
for row in cursor:
    Test.append(row.Test)
    Value.append(row.Value123/123)
Compiled_Dict = {'Date&Time': Test}
Compiled_Dict['Value'] = Value
conn.close()
df = pd.DataFrame(Compiled_Dict)
The problem occurs in this line
Value.append(row.Value123/123)
When I run it I get an error that the database doesn't have a table named 'Value123', since I think it's trying to divide Value123 by 123. Unfortunately the table in the database is named like this and I cannot change it, so how do I pull the values from this table?
Edit:
cursor.execute("select Test, Value123/123 as newValue from db")
I tried this and it worked, thanks for the solutions (suggested by Yu Jiaao).
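For reference, the general fix for identifiers containing characters like / is to quote them; the quoting style depends on the engine (backticks in MySQL, [brackets] in SQL Server, double quotes in standard SQL and SQLite). A sketch with the stdlib sqlite3 module, using a made-up table and the problematic column name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Unquoted, Value123/123 would parse as "Value123 divided by 123",
# so the column name is wrapped in double quotes everywhere it appears.
cur.execute('CREATE TABLE db (Test TEXT, "Value123/123" REAL)')
cur.execute("INSERT INTO db VALUES (?, ?)", ("2020-01-01", 42.0))
cur.execute('SELECT Test, "Value123/123" FROM db')
rows = cur.fetchall()
print(rows)  # [('2020-01-01', 42.0)]
conn.close()
```

With the quoting in place, the stored value comes back instead of a division result.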

Iterating over table names and updating queries

I'm using PyMySQL to update data by iterating through table names, but the problem is that I was only able to update the data from the first table;
the loop is not working after the first table.
I've tried using fetchall() to get the table names and looping over those, but it didn't work.
def update():
    global table_out
    global i
    cursor.execute("USE company;")
    cursor.execute("SHOW TABLES;")
    lst = []
    for table_name in cursor:
        lst.append(table_name)
    emp_list = lst[0][0]
    print(emp_list)
    i = 0
    while i <= len(lst) - 1:
        state = """SELECT `employee_name` from `%s` WHERE attended=0 """ % (employees)
        out = cursor.execute(state)
        result = cursor.fetchall()
        i += 1
        for records in result:
            table_out = ''.join(records)
            print(table_out)
            db.commit()
            try:
                sql = """UPDATE `%s` SET `attended` = True WHERE `employee_name` = '%s' ;""" % (emp_list, table_out)
                cursor.execute(sql)
I expect to iterate over all the tables in that database when this function is called
I'm not sure that your approach is quite optimal.
[In your middle block, employees is undefined - should that be emp_list?]
Your select statement appears to reduce down to
SELECT employee_name FROM $TABLE WHERE attended=0;
which you then want to go through each table and change the value. Have you considered using the UPDATE verb instead? https://dev.mysql.com/doc/refman/8.0/en/update.html
UPDATE $table SET attended=True WHERE attended=0;
If that works for your desired outcome, then that will save you quite a few table scans and double handling.
So perhaps you could refactor your code along these lines:
def update_to_True():
    # assume that cursor is a global
    cursor.execute("USE company;")
    cursor.execute("SHOW TABLES;")
    tables = cursor.fetchall()
    for el in tables:
        # SHOW TABLES returns 1-tuples, hence el[0]
        stmt = "UPDATE `{0}` SET attended=True WHERE attended=0;".format(el[0])
        res = cursor.execute(stmt)
        # for debugging....
        print(res)
that's it!
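The refactor above needs a MySQL server to run, but the same pattern (list the tables, then run one UPDATE per table) can be sketched against the stdlib sqlite3 module; the table names and schema below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Two made-up per-department attendance tables standing in for the company database.
for t in ("sales", "engineering"):
    cur.execute("CREATE TABLE {0} (employee_name TEXT, attended INTEGER)".format(t))
    cur.execute("INSERT INTO {0} VALUES ('alice', 0), ('bob', 1)".format(t))

# sqlite_master stands in for MySQL's SHOW TABLES.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = [row[0] for row in cur.fetchall()]
for t in tables:
    # One statement per table: no per-row SELECT/UPDATE round trips.
    cur.execute("UPDATE {0} SET attended=1 WHERE attended=0".format(t))
conn.commit()

remaining = conn.execute("SELECT count(*) FROM sales WHERE attended=0").fetchone()[0]
print(remaining)  # 0
```

Every not-yet-attended row is flipped in a single statement per table, which is the saving the answer describes.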

How to store and query hex values in mysqldb

I want to use a thermal printer with a Raspberry Pi. I want to read the printer vendor ID and product ID from a MySQL database. My columns are of type varchar.
My code is
import MySQLdb
from escpos.printer import Usb

db = MySQLdb.connect(host=HOST, port=PORT, user=USER, passwd=PASSWORD, db=database)
cursor = db.cursor()
sql = "select * from printerdetails"
cursor.execute(sql)
result = cursor.fetchall()
db.close()
for row in result:
    printer_vendor_id = row[2]
    printer_product_id = row[3]
    input_end_point = row[4]
    output_end_point = row[5]
print printer_vendor_id, printer_product_id, input_end_point, output_end_point
Printer = Usb(printer_vendor_id, printer_product_id, 0, input_end_point, output_end_point)
Printer.text("Hello World")
Printer.cut()
but it does not work; the IDs are strings. The print statement shows 0x154f 0x0517 0x82 0x02. In my case
Printer = Usb(0x154f, 0x0517, 0, 0x82, 0x02)
works fine. How could I store the same IDs in the database and use them to configure the printer?
Your problem is that your call to Usb is expecting integers, which works if you call it like this
Printer = Usb(0x154f,0x0517,0,0x82,0x02)
but your database call is returning tuples of hexadecimal values stored as strings. So you need to convert those strings to integers, like this:
for row in result:
    printer_vendor_id = int(row[2], 16)
    printer_product_id = int(row[3], 16)
    input_end_point = int(row[4], 16)
    output_end_point = int(row[5], 16)
Now if you do
print printer_vendor_id,printer_product_id,input_end_point,output_end_point
you will get
5455 1303 130 2
which might look wrong, but isn't, which you can check by asking for the integers to be shown in hex format:
print ','.join('0x{0:04x}'.format(i) for i in (printer_vendor_id,printer_product_id,input_end_point,output_end_point))
0x154f,0x0517,0x0082,0x0002
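As a quick sanity check on the conversion, int(s, 16) accepts the strings whether or not they carry the 0x prefix (values taken from the question):

```python
# int(..., 16) parses hex strings with or without the 0x prefix.
ids = ["0x154f", "0x0517", "0x82", "0x02"]
values = [int(s, 16) for s in ids]
print(values)          # [5455, 1303, 130, 2]
print(hex(values[0]))  # 0x154f, round-trips back to the original
```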
I should point out that this only works because your database table contains only one row. for row in result loops through all of the rows in your table, but there happens to be only one, which is okay. If there were more, your code would always get the last row of the table, because it doesn't check the identifier of the row and so will repeatedly assign values to the same variables until it runs out of data.
The way to fix that is to put a where clause in your SQL select statement. Something like
"select * from printerdetails where id = '{0}'".format(printer_id)
Now, because I don't know what your database table looks like, the column name id is almost certainly wrong. And very likely the datatype also: it might very well not be a string.

Put retrieved data from MySQL query into DataFrame pandas by a for loop

I have one database with two tables, both of which have a column called barcode. The aim is to retrieve a barcode from one table and search for entries in the other, where extra information for that barcode is stored. I would like both sets of retrieved data to be saved in a DataFrame. The problem is that when I insert the data retrieved by the second query into the DataFrame, it stores only the last entry:
import mysql.connector
import pandas as pd

cnx = mysql.connector.connect(user=user, password=password, host=host, database=database)
query_barcode = "SELECT barcode FROM barcode_store"
cursor = cnx.cursor()
cursor.execute(query_barcode)
data_barcode = cursor.fetchall()
Up to this point everything works smoothly, and here is the part with problem:
query_info = "SELECT product_code FROM product_info WHERE barcode=%s"
for each_barcode in data_barcode:
    cursor.execute(query_info % each_barcode)
    pro_info = pd.DataFrame(cursor.fetchall())
pro_info contains only the last matching barcode's information, while I want to retrieve the information for every match in data_barcode.
That's because you are overwriting the existing pro_info with new data in each loop iteration. You should rather do something like:
query_info = ("SELECT product_code FROM product_info")
cursor.execute(query_info)
pro_info = pd.DataFrame(cursor.fetchall())
Making so many SELECTs is redundant since you can get all records in one SELECT and instantly insert them to your DataFrame.
#edit: However, if you need the WHERE clause to fetch only specific products, you need to store the records in a list until you insert them into the DataFrame. Your code will eventually look like:
pro_list = []
query_info = "SELECT product_code FROM product_info WHERE barcode=%s"
for each_barcode in data_barcode:
    cursor.execute(query_info % each_barcode)
    pro_list.append(cursor.fetchone())
pro_info = pd.DataFrame(pro_list)
Cheers!
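The overwrite-versus-append difference is easy to reproduce without a database; the values below are made up to stand in for the per-barcode query results:

```python
results = [("A",), ("B",), ("C",)]  # stand-in for per-barcode fetchone() rows

# Rebinding inside the loop keeps only the last iteration's row.
for row in results:
    pro_info = [row]
print(pro_info)  # [('C',)]

# Appending to a list created outside the loop keeps them all.
pro_list = []
for row in results:
    pro_list.append(row)
print(pro_list)  # [('A',), ('B',), ('C',)]
```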

fetchall( ) not working on SQLite for certain records

I have some data stored into a SQLite Database and a Python program that is supposed to retrieve them. The user inputs two dates (e.g. 01/01/2010 - 01/01/2013) and the program should make some analysis on the data going from 01/01/2010 to 01/01/2013.
The way the code is retrieving the data is the following:
con = lite.connect("data.db")
cur = con.cursor()
cur.execute('SELECT ROWID from Table WHERE Date IS "01/01/2010"')
rowDate1 = cur.fetchall()
cur.execute('SELECT ROWID from Table WHERE Date IS "01/01/2013"')
rowDate2 = cur.fetchall()
The two variables rowDate1 and rowDate2 are the lower and upper bounds used to build the SQL statement that will give back the values used in the Python program.
Basically, if I want to run my analysis for the period 01/01/2010 - 01/01/2013, the variables rowDate1 and rowDate2 will contain the row IDs to be used for the data retrieving.
So, until a certain date, specifically 01/01/2010, the variable rowDate1 is giving back the rowID number with the following sample output:
rowDate1 == [(6002,)]
After 01/01/2010, though, the return of the cur.fetchall() is empty: rowDate1 = []
I have checked the data into the database and they exist, they have a rowID exactly like the others.
Moreover, if the data wouldn't exist, then even the variable rowDate2 should return an empty value, which is not the case.
In fact, if I execute:
cur.execute('SELECT ROWID from Table WHERE Date IS "01/01/2012"')
rowDate1 = cur.fetchall()
rowDate2 = cur.fetchall()
print rowDate1, rowDate2
I would get the following output:
[] [(6002,)]
I know, it's absurd, but I really don't understand how this could happen.
I fear this is a bug in the code and I've really no clue. Does anyone have an idea of how I can go about detecting it, or what it could depend on?
NOTE: there is not really much more I can post about the remaining part of the code. Exactly the same statement works in the assignment of rowDate2 but not of rowDate1; I mean, if I input from the GUI the date frame "01/01/2012 - 01/01/2012", rowDate1 won't have a value while rowDate2 will. But it's weird because they are the same statements, just that rowDate1 is assigned first!?
COMPLETE CODE:
con = None
con = lite.connect(dPath.adresseBDD + '/' + symb + '.db')
cur = con.cursor()
cur.execute('SELECT ROWID from ' + frequence + ' WHERE Date IS "' + date1 + '"')
rowDate1 = cur.fetchall()
cur.execute('SELECT ROWID from ' + frequence + ' WHERE Date IS "' + date2 + '"')
rowDate2 = cur.fetchall()
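One behaviour worth ruling out while debugging this: a cursor's result set is consumed by fetchall(), so two fetchall() calls after a single execute() return the rows once and then an empty list. A minimal sketch with a made-up table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (Date TEXT)")
cur.execute("INSERT INTO t VALUES ('01/01/2012')")

cur.execute("SELECT ROWID FROM t WHERE Date IS '01/01/2012'")
first = cur.fetchall()
second = cur.fetchall()  # the result set was already consumed
print(first, second)  # [(1,)] []
```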
