So, I'm building a little tool to save errors and their solutions as a knowledge base. It is stored in a SQL Server database (I'm using pyodbc). The users don't have access to the database, just the GUI.
The GUI has three buttons: one to add a new ErrorID, one to search for an ErrorID (if it exists in the database), and one to delete one.
It also has a text panel where it should show the solution for the searched error.
So I need to extract the columns and rows of my DB and put them in a dictionary, then loop through that dict looking for the searched error and show its solution on the text panel.
My issue is that the dict I get has this form: {('Error', 1): ('Solution', 'one')} and so on, so I cannot seem to loop through it successfully and show ONLY the word "one" on the text panel.
In other words, when I search "1", it should print "one" on the text panel.
My question is: how can I transform this {('Error', 1): ('Solution', 'one')} INTO this {"1": "one"}?
Edit:
Sorry, I forgot to add some parts of my code.
This part is what appends every row to a dict:
readsql = sql_conn.sql()
readsql.sqlread()
columns = [column[0] for column in readsql.cursorv.description]
results = []
for row in readsql.cursorv.fetchall():
    results.append(zip(columns, row))
results = dict(results)
I tried storing the part of the dict that I know is going to show in a string named, well, string, and then comparing it to k in the for loop, but it doesn't work.
string = "('Error', " + str(error_number) + ")"
for k in results.keys():
if k == string:
post = readsql.cursorv.execute('SELECT * FROM master.dbo.Errors WHERE Error = (?)', (error_number))
text_area.WriteText(post)
break
Here is the sql class:
class sql():
    def __init__(self):
        self.conn = pyodbc.connect('Driver={SQL Server};'
                                   'Server=.\SQLEXPRESS;'
                                   'Database=master;'
                                   'Trusted_Connection=yes;')
        # cursor variable
        self.cursorv = self.conn.cursor()

    def sqlread(self):
        self.cursorv.execute('SELECT * FROM master.dbo.Errors')
Your problem comes from the following code unnecessarily zipping the column headers into the resulting dict:
for row in readsql.cursorv.fetchall():
    results.append(zip(columns, row))
results = dict(results)
You can instead construct the desired dict directly from the sequence of tuples returned by the fetchall method:
results = dict(readsql.cursorv.fetchall())
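If you specifically want the {"1": "one"} shape from your question, a small sketch (assuming SELECT * returns exactly the two columns Error and Solution, in that order) would be:

# Sketch, assuming each fetched row is an (Error, Solution) pair.
results = {str(error): solution for error, solution in readsql.cursorv.fetchall()}

# Looking up the searched error for the text panel:
solution = results.get(str(error_number))
if solution is not None:
    text_area.WriteText(solution)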
When I select a specific country in the combo_Nations combobox (the country name is taken from the "Nations_name" column of the "All_Nations" table), I would like to get the corresponding ID_Nations of the selected country (ID_Nations is in the same "All_Nations" table). The ID will be inserted automatically into another table of the database, together with other fields, after clicking the "Add" button that runs the add function.
(If you are wondering why I need to automatically insert the ID_Nations from another table, the reason is that I need it for relational purposes, as a foreign key.)
So I would like to take the data circled in red in my screenshot (I cannot attach images here). Basically the table is this:
CREATE TABLE "All_Nations" (
"ID_Nations" INTEGER, >>> example: 453
"Nations_name" INTEGER, >>> example: England
PRIMARY KEY("ID_Nations" AUTOINCREMENT)
);
So the combo_Nations combobox, through the combo_nations function, fetches Nations_name only, while id_nations should extract the ID_Nations corresponding to the nation selected in the combobox. For example, if I select England in the combobox, id_nations should automatically give me 453. The two values will be saved in a new table by the add() function (I only include part of that code, showing the minimum needed to understand what I need, because it works correctly). The new table will store a row containing: ID, Nations_name, ID_Nations, and other data of various kinds.
The error I get is TypeError: 'list' object is not callable; to be precise, this is it:
db.insert(nations.get(),.....other data .get(), id_campionati_value())
TypeError: 'list' object is not callable
I don't know if I wrote the id_nations function incorrectly, if the problem is in db.insert, or if it's both.
Here's the code. There is some problem in the id_nations function I created:
def id_nations():
    nations = combo_Nations.get()
    cursor.execute('SELECT ID_Nations FROM All_Nations WHERE Nations_name=?', (nations,))
    result = cursor.fetchone()
    return result

# this is ok. no problem
def combo_nations():
    campionato = combo_Nations.get()
    cursor.execute('SELECT Nations_name FROM All_Nations')
    result = [row[0] for row in cursor]
    return result
#Combobox Nations
lbl_Nations = Label(root, text="Nations", font=("Calibri", 11), bg="#E95420", fg="white")
lbl_Nations.place(x=6, y=60)
combo_Nations = ttk.Combobox(root, font=("Calibri", 11), width=30, textvariable=nations, state="readonly")
combo_Nations.place(x=180, y=60)
combo_Nations.set("Select")
combo_Nations['values'] = combo_nations()
combo_Nations.bind('<<ComboboxSelected>>', combo_city)
The data will be saved here (everything works fine):
def add():
    id_campionati_value = id_campionati()
    db.insert(nations.get(), .....other data .get(), id_campionati_value())
    messagebox.showinfo("Success", "Record Inserted")
    clearAll()
    dispalyAll()
Theoretically I understand what the problem is, but I don't know how to solve it. I'm just starting out with Python. Can you show me the solution in the answer? Thanks
P.S.: I removed unnecessary parts of the code so as not to lengthen the question, but everything you need to understand it is there. Thanks
UPDATE
Here is the rest of the code that inserts data into the database. It may look a bit confusing because the code is split across two files (main.py and db.py); writing out both files in full would be too long.
def getData(event):
    selected_row = tv.focus()
    data = tv.item(selected_row)
    global row
    row = data["values"]
    # print(row)
    nations.set(row[1])
    ....
    id_nations.set(row[10])

# Insert Function
def insert(self, nations, ....., id_nations):
    self.cur.execute("insert into All_Nations values (NULL,?,?,?,?,?,?,?,?,?,?)",
                     (nations, ..., id_nations))
    self.con.commit()
Based on the error, id_campionati_value is a list. When you do id_campionati_value() you're calling the list, and as the error says, you can't call the list.
Since you seem to need a single value, you first need to define id_nations to return the id rather than a list. It would look something like the following example, returning the first column of the matching row.
I would also suggest renaming the function to make it more clear that it's fetching something from the database:
def get_id_nations():
    nations = combo_Nations.get()
    cursor.execute('SELECT ID_Nations FROM All_Nations WHERE Nations_name=?', (nations,))
    result = cursor.fetchone()
    return result[0]
Then, you can use this value when calling db.insert:
id_nations = get_id_nations()
db.insert(nations.get(),.....other data .get(), id_nations)
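If the selected nation might not exist in the table, cursor.fetchone() returns None and result[0] would raise a TypeError; a small defensive variant (just a sketch) could look like this:

def get_id_nations():
    nations = combo_Nations.get()
    cursor.execute('SELECT ID_Nations FROM All_Nations WHERE Nations_name=?', (nations,))
    result = cursor.fetchone()
    # fetchone() returns None when no row matches the selected nation
    if result is None:
        return None
    return result[0]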
This is my first post on Stack Overflow... thanks in advance for any and all help.
I am very new to programming, and I created a function in Python to dynamically search an sqlite3 database rather than writing tons of queries. I will show the code and try to explain what I intended to happen at each stage. In short, my cursor.fetchall() always evaluates to empty even when I am certain there is a value in the database that it should find.
def value_in_database_check(table_column_name: str, value_to_check: str):
    db, cursor = get_connection()  # here i get a database and cursor connection
    for tuple_item in schema():  # here i get the schema of my database from another function
        if tuple_item[0] == "table":  # now I check if the tuple is a table
            schema_table = tuple_item[4]  # this just gives me the table info of the tuple
            # this lets me know the index of the column I am looking for in the table
            found_at = schema_table.find(table_column_name)
            # if my column value was found I will enter this block of code
            if not found_at == -1:
                table_name = tuple_item[1]
                to_find_sql = "SELECT * FROM {} WHERE ? LIKE ?".format(table_name)
                # value_to_check correlates to table_column_name
                # example "email", "this#email.com"
                cursor.execute(to_find_sql, (table_column_name, value_to_check))
                # this always evaluates to an empty list even if I am certain that the
                # information is in the database
                fetch_to_find = cursor.fetchall()
                if len(fetch_to_find) > 0:
                    return True
                else:
                    return False
I believe ? can only be used as a placeholder for values, not for names of tables or - as you are trying to do - columns. A likely fix (I haven't tested it, though):
to_find_sql = "SELECT * FROM {} WHERE {} LIKE ?".format(table_name, table_column_name)
cursor.execute(to_find_sql, (value_to_check, ))
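If you also want to guard the column name before formatting it into the query (since it cannot be a ? parameter, as explained above), one possible approach - a sketch, not tested against your schema - is to check it against the table's actual columns first:

# Sketch: confirm the column really exists in the table before building the statement.
cursor.execute("PRAGMA table_info({})".format(table_name))
valid_columns = {row[1] for row in cursor.fetchall()}  # row[1] holds the column name
if table_column_name in valid_columns:
    to_find_sql = "SELECT * FROM {} WHERE {} LIKE ?".format(table_name, table_column_name)
    cursor.execute(to_find_sql, (value_to_check,))
    fetch_to_find = cursor.fetchall()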
I am implementing a student database project which has multiple tables such as student, class, section, etc.
I wrote a delete_tables function which takes the table name, attribute, and value as parameters to delete a row from a specific table, but there seems to be some sort of syntax error in my code:
def delete_tables(tab_name, attr, value):
    c.execute("delete from table=:tab_name where attribute=:attr is value=:value ",
              {'tab_name': tab_name, 'attr': attr, 'value': value})
Input:
delete_tables('section','sec_name','S1')
Error text:
c.execute("delete from table=:tab_name where attribute=:attr is value=:value ",{'tab_name':tab_name, 'attr': attr, 'value': value})
sqlite3.OperationalError: near "table": syntax error
I've tried all the mentioned answers, and what you're all suggesting is that it will also be insecure even if it works. So do I have to write a separate delete function for every table instead of one single function, or is there another alternative so that I don't have to keep writing n functions for n tables?
Thanks in Advance :))
The problem is that you can't use parametrized queries (that :tab_name) on things other than values (not sure I am using the right term): table names, column names and SQL keywords are forbidden.
where age > :max_age is OK.
where :some_col > :max_age is not OK.
where age :comparison_operator :max_age is not OK either.
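To make that concrete with the table and column from the question:

# OK: only the value is a bound parameter
c.execute("delete from section where sec_name = :value", {"value": "S1"})

# NOT OK: table and column names cannot be bound parameters;
# sqlite3 raises an OperationalError for a statement like
#   "delete from :tab_name where :attr = :value"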
Now, you can build your own query using string concatenation or f-strings, but... 🧨 this is a massive, massive SQL injection risk. See Bobby Tables. Not to mention that concatenating values into SQL query strings quickly runs into issues when you have to deal with text, numbers or None (None => NULL, text needs quotes, numbers don't).
You could instead build the query using string substitution that accepts only known values for the table and column names, and then drive the delete criterion's value through a parametrized query on :value.
(While this seems restrictive, letting a random caller determine which tables to delete from is just not safe in the least.)
Something like:
def delete_tables(tab_name, attr, value):
    safe_tab_name = my_dict_of_known_table_names[tab_name]
    safe_attr = my_dict_of_known_column_names[attr]
    # you have to use `=`, not `is`, here 👇
    qry = f"delete from {safe_tab_name} where {safe_attr} = :value"
    # not entirely sure about SQLite's bind/parametrized syntax;
    # look it up if needed.
    c.execute(qry, dict(value=value))
Assuming a user only enters value directly, that at least is protected from SQL injection.
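For illustration, the whitelist lookups could simply be hard-coded for the tables and columns the application actually manages (my_dict_of_known_table_names / my_dict_of_known_column_names are placeholder names, and the student/class entries below are only assumptions based on the tables mentioned in the question):

# Only names listed here can ever reach the generated SQL.
my_dict_of_known_table_names = {'section': 'section', 'student': 'student', 'class': 'class'}
my_dict_of_known_column_names = {'sec_name': 'sec_name'}

delete_tables('section', 'sec_name', 'S1')  # same call as in the question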
You need to look at the exact SQL command that will be executed by the Python method.
For the method call delete_tables('section', 'sec_name', 'S1'), the SQL command that gets generated is
delete from table=section where attribute=sec_name is value=S1
This will be an invalid command in SQL. The correct command should be
delete from section where sec_name='S1'
So you need to change your Python function accordingly. The changes should be as follows:
def delete_tables(tab_name, attr, value):
    c.execute("delete from :tab_name where :attr = ':value'",
              {'tab_name': tab_name, 'attr': attr, 'value': value})
def delete_tables(tab_name, attr, value):
    c.execute("delete from " + tab_name + " where " + attr + " = " + value)
I think something like that will work. The issue is that you are trying to refer to a column, but in your query its name is always the literal word attribute; to handle it properly you should make it a parameter of the function and build it into the query string.
Hope it helped.
Edit:
Check this SQLite python
What c.execute does is 'execute' an SQL query, so you can write something like c.execute("select * from clients") if you have a clients table.
execute runs a query and brings back the result set (if there is one), so to delete from your table using a normal SQL query you would type delete from clients where client_id = 12 in the console, and that statement would delete the client with id equal to 12.
Now, if you are using SQLite in python, you will do
c.execute("delete from clients where client_id = 12")
but since you want it to work for any table and any field (attribute), the table name, the field name and the value of that field become variables.
tableName = "clients"
field = "client_id"
value = "12" #must be string because you would have to cast it from int in the execute
"""
if value is a varchar you must write
value = "'12'" because the '' are needed.
"""
c.execute("delete from " + tableName + " where " + field + " = " + value)
and on top of that, since you want it to be a function:
def delete_tables(tableName, field, value):
    c.execute("delete from " + tableName + " where " + field + " = " + value)
Edit 2:
Aaron's comment is true, it is not secure; the next step you would take is
def delete_tables(tab_name, attr, value):
    # no ':value' (it limits the value to characters)
    c.execute("delete from :tab_name where :attr = :value",
              {'tab_name': tab_name, 'attr': attr, 'value': value})
It is from Vatsal's answer
I'm new to Python and SQL, but I need to delete multiple entries from a table on a remote server. I would also prefer to preserve the input structure of a function I was given, because it is used in my colleagues' code.
I came up with a solution, similar to the one presented below, that does the job. I deliberately avoided using any sort of executemany() method because (if I am not mistaken) they can be terribly slow.
import sqlalchemy as sa
import urllib

def delete_rows(tablename, colnames, data):
    """
    tablename - name of db table with dbname, like RiskData..factors
    colnames - column names to use as keys in deletion
    data - a list of tuples, one tuple per row; the number of elements in each
           tuple must be the same as the number of column names
    """
    # Connection details
    engine = sa.create_engine("mssql+pyodbc://some_server")
    connection = engine.connect()
    # Data has to be a list - throw an exception if it is not
    if not (type(data) is list):
        raise Exception('Data must be a list')
    # assemble one long query statement
    query = "DELETE " + tablename + " WHERE "
    query_dp = "or (" + " = '{}' and ".join(colnames) + "= '{}') "
    query_tail = ""
    for record_entries in data:
        query_tail += query_dp.format(*record_entries)
    query += query_tail[3:-1]
    connection.execute(query)
    connection.close()
I would like to ask whether this solution is inefficient and will be slow for large amounts of data. If so, what would a more elegant solution be?
Don't know about speed, but as far as elegance goes, don't use string formatting for passing values to SQL queries. Since you're already using SQLAlchemy, you can leverage its query building capabilities:
def delete_rows(tablename, colnames, data):
    """
    tablename - name of db table with dbname, like RiskData..factors
    colnames - column names to use as keys in deletion
    data - a list of tuples, one tuple per row; the number of elements in each
           tuple must be the same as the number of column names
    """
    # Data has to be a list - throw an exception if it is not
    if not isinstance(data, list):
        raise Exception('Data must be a list')
    # Connection details
    engine = sa.create_engine("mssql+pyodbc://some_server")
    # Create `column()` objects for producing bindparams
    cols = [sa.column(name) for name in colnames]
    # Create a list of predicates, to be joined with OR
    preds = []
    for record_entries in data:
        pred = sa.and_(*[c == e for c, e in zip(cols, record_entries)])
        preds.append(pred)
    # assemble one long query statement
    query = sa.table(tablename).delete().where(sa.or_(*preds))
    with engine.begin() as connection:
        connection.execute(query)
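A call would then look something like this (RiskData..factors is the example table from the docstring; the column names and rows below are made-up placeholders):

# Illustrative usage only; the column names are placeholders, not from the original post.
rows_to_delete = [
    ('2019-01-31', 'GBP'),
    ('2019-02-28', 'EUR'),
]
delete_rows('RiskData..factors', ['asof_date', 'currency'], rows_to_delete)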
Whether or not executemany() is slow depends on the DB-API driver in use. In case of pyodbc this used to be true, but there's been work to improve it.
This is how I get all of the topicid field values from the Topics table.
all_topicid = [i.topicid for i in session.query(Topics)]
But when the Topics table has lots of rows, the VPS kills this process. So is there a good method to resolve this?
Thanks everyone. I edited my code again; my new code is below:
last = session.query(Topics).order_by('-topicid')[0].topicid
all_topicid = [i.topicid for i in session.query(Topics.topicid)]
all_id = range(1, last+1)
diff = list(set(all_id).difference(set(all_topicid)))
I want to get diff. It is now faster than before. So are there other methods to improve this code?
You could try changing your query to return a list of ids with something like:
all_topic_id = session.query(Topics.topicid).all()
If the table contains duplicate topicids, you could add distinct to the above to return unique values:
from sqlalchemy import distinct
all_topic_id = session.query(distinct(Topics.topicid)).all()
If this still causes an issue, I would probably go for writing a stored procedure that returns the list of topicids and have SQLAlchemy call it.
For the second part, I would do something like the below:
from sqlalchemy import distinct, func

all_topic_ids = [row[0] for row in session.query(distinct(Topics.topicid)).all()]  # gets all existing ids
max_id = session.query(func.max(Topics.topicid)).one()  # gets the last id (as a one-element row)
all_ids = range(1, max_id[0] + 1)  # every possible id from 1 up to the last one
missing_ids = list(set(all_ids) - set(all_topic_ids))  # ids that are missing from the table