Inserting values in sqlite 3 but appearing as column names - python

I'm using inputs in Python 3.5 with SQLite3 to add user information to a database. However, once I obtain the data and insert it into the database, rather than inserting the data into the columns it tells me there is no such column.
The error I get is as follows:
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\Luke_2\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
    return self.func(*args)
  File "C:\Users\Luke_2\Desktop\Computing\Coursework\live\current.py", line 303, in details
    cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES("+user+","+passw+")")
sqlite3.OperationalError: no such column: a
And the function in my code doing this is as follows:
def details():
    user = email_sign.get()
    user1 = email1_sign.get()
    passw = password_sign.get()
    password1 = password1_sign.get()
    if user == user1 and passw == password1:
        cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES("+user+","+passw+")")
        conn_user.commit()
    else:
        print("please try again")

It is because the request that you actually execute has no quotes around the values, so they are interpreted as column names by the SQL engine. If you pass a and b respectively, you will execute
INSERT INTO LogIn(Email,Password) VALUES(a,b)
when what is required is
INSERT INTO LogIn(Email,Password) VALUES('a','b')
But you should never do that! Building requests that way, by hardcoding parameters into the request string, has been the cause of SQL injection problems for decades.
The correct way is to build a parameterized request:
cur_user.execute("INSERT INTO LogIn(Email,Password) VALUES(?,?)", (user, password))
simpler, smarter and immune to SQL injection...
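For reference, the asker's details function rewritten with a parameterized query might look like the sketch below; the Tkinter entry variables and the cur_user/conn_user objects are taken from the question.

def details():
    user = email_sign.get()
    user1 = email1_sign.get()
    passw = password_sign.get()
    password1 = password1_sign.get()
    if user == user1 and passw == password1:
        # Placeholders let sqlite3 quote and escape the values itself
        cur_user.execute("INSERT INTO LogIn(Email, Password) VALUES (?, ?)", (user, passw))
        conn_user.commit()
    else:
        print("please try again")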

Related

Apache Superset not loading table records/columns

I am trying to add a table in Superset. The other tables get added properly, meaning the columns are fetched properly by Superset. But for my table booking_xml, it does not load any columns.
The description of the table is: [image of table]
After adding this table, when I click on the table name to explore it, it gives the following error
Empty query?
Traceback (most recent call last):
  File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/viz.py", line 473, in get_df_payload
    df = self.get_df(query_obj)
  File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/viz.py", line 251, in get_df
    self.results = self.datasource.query(query_obj)
  File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 1139, in query
    query_str_ext = self.get_query_str_extended(query_obj)
  File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 656, in get_query_str_extended
    sqlaq = self.get_sqla_query(**query_obj)
  File "/home/superset/superset_venv/lib/python3.8/site-packages/superset/connectors/sqla/models.py", line 801, in get_sqla_query
    raise Exception(_("Empty query?"))
Exception: Empty query?
ERROR:superset.viz:Empty query?
However, when I try to explore it using the SQL editor, it loads up properly. I found a difference in the form_data parameter in the URL when loading from the tables page versus from the SQL editor.
URL from SQL Lab view:
form_data={"queryFields":{"groupby":"groupby","metrics":"metrics"},"datasource":"192__table","viz_type":"table","url_params":{},"time_range_endpoints":["inclusive","exclusive"],"granularity_sqla":"created_on","time_grain_sqla":"P1D","time_range":"Last+week","groupby":[],"metrics":["count"],"all_columns":[],"percent_metrics":[],"order_by_cols":[],"row_limit":10000,"order_desc":true,"adhoc_filters":[],"table_timestamp_format":"smart_date","color_pn":true,"show_cell_bars":true}
URL from datasets list:
form_data={"queryFields":{"groupby":"groupby","metrics":"metrics"},"datasource":"191__table","viz_type":"table","url_params":{},"time_range_endpoints":["inclusive","exclusive"],"time_grain_sqla":"P1D","time_range":"Last+week","groupby":[],"all_columns":[],"percent_metrics":[],"order_by_cols":[],"row_limit":10000,"order_desc":true,"adhoc_filters":[],"table_timestamp_format":"smart_date","color_pn":true,"show_cell_bars":true}
When loading from datasets list, /explore_json/ gives 400 Bad Request.
Superset version == 0.37.1, Python version == 3.8
Superset saves the details/metadata of the table that has to be connected. My table had a column with a very long datatype, as you can see in the image in the question, but Superset stores the datatype string in its metadata database as a varchar of length 32. The metadata database therefore refused to store the value, which caused the error, and no records were fetched even after adding the table as a datasource.
What I did was increase the length of that column's datatype:
ALTER TABLE table_columns MODIFY type varchar(200)

Python getting results from Azure Storage Table with azure-data-tables

I am trying to query an Azure Storage table to get all rows and turn them into a table on a web site. However, I cannot get the entries from the table; I get the same error every time: "azure.core.exceptions.HttpResponseError: The requested operation is not implemented on the specified resource."
For code I am following the examples here and it is not working as expected.
import os

from azure.data.tables import TableServiceClient
from azure.core.credentials import AzureNamedKeyCredential

def read_storage_table():
    credential = AzureNamedKeyCredential(os.environ["AZ_STORAGE_ACCOUNT"], os.environ["AZ_STORAGE_KEY"])
    service = TableServiceClient(endpoint=os.environ["AZ_STORAGE_ENDPOINT"], credential=credential)
    client = service.get_table_client(table_name=os.environ["AZ_STORAGE_TABLE"])
    entities = client.query_entities(query_filter="PartitionKey eq 'tasksSeattle'")
    client.close()
    service.close()
    return entities
Then calling the function.
table = read_storage_table()
for record in table:
    for key in record.keys():
        print("Key: {}, Value: {}".format(key, record[key]))
And that returns:
Traceback (most recent call last):
  File "C:\Program Files\Python310\Lib\site-packages\azure\data\tables\_models.py", line 363, in _get_next_cb
    return self._command(
  File "C:\Program Files\Python310\Lib\site-packages\azure\data\tables\_generated\operations\_table_operations.py", line 386, in query_entities
    raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'Not Implemented'
Content: {"odata.error":{"code":"NotImplemented","message":{"lang":"en-US","value":"The requested operation is not implemented on the specified resource.\nRequestId:cd29feda-1002-006b-679c-3d39e8000000\nTime:2022-03-22T03:27:00.5993216Z"}}}
Using a similar function I am able to write to the table. But even trying entities = client.list_entities() I get the same error. I'm at a loss.
KrunkFu, thank you for identifying and sharing the solution here. Posting the same into the answer section to help other community members.
Replacing https://<accountname>.table.core.windows.net/<table> with https://<accountname>.table.core.windows.net as the endpoint solved the issue.
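In other words, the endpoint passed to TableServiceClient should be the account-level URL only, without the table name appended; the table name is supplied to get_table_client instead. A minimal sketch, reusing the environment variable names from the question:

import os

from azure.core.credentials import AzureNamedKeyCredential
from azure.data.tables import TableServiceClient

account = os.environ["AZ_STORAGE_ACCOUNT"]
credential = AzureNamedKeyCredential(account, os.environ["AZ_STORAGE_KEY"])

# The endpoint is the account-level URL, with no table name appended
endpoint = "https://{}.table.core.windows.net".format(account)
service = TableServiceClient(endpoint=endpoint, credential=credential)

# The table name goes here instead of in the endpoint URL
client = service.get_table_client(table_name=os.environ["AZ_STORAGE_TABLE"])
entities = client.query_entities(query_filter="PartitionKey eq 'tasksSeattle'")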

Trying to take a date as input in python and updating value in mysql table using mysql connector

My aim here is to update an existing table in a MySQL database, taking all the variables for the update statement as input from the user. I am using mysql-connector to interface with Python.
coltbu=input("Now enter the name of the column whose value for that row/condition is to be updated")
if coltbu=="dob":
    y1=input("Enter the value to be set for date of joining in yyyy-mm-dd")
elif col=="id":
    nval=int(input("Enter the value to be set for salary"))
else:
    nval=input("Enter the value to be set for this column")
x='%d-%m-%Y'
tup=(coltbu,y1,x,col,val)
print(tup)
cursor1.execute("Update student set %s=str_to_date(%s,%s) where %s=%s"%tup)
con1.commit()
con1.close()
I have tried lots of variations after searching painstakingly for solutions on the internet, but I can't seem to find anything that works. Some error or other always shows up.
In this case the table I am using is
image of table
The command I am trying to execute is
update student set dob='2003-09-12' where id=6;
I also tried to use datetime for this to work.
coltbu=input("Now enter the name of the column whose value for that row/condition is to be updated")
if coltbu=="dob":
    y=int(input("Enter the value to be set for dob in yyyy/mm/dd. First enter year and hit enter"))
    m=int(input("Now enter month"))
    d=int(input("Now enter date"))
    nval=datetime.date(datetime(y,m,d))
elif col=="id":
    nval=int(input("Enter the value to be set for id"))
else:
    nval=input("Enter the value to be set for this column")
tup=(int(coltbu,nval,col,val)
print(tup)
cursor1.execute("Update student set %s=%s where %s=%s"%tup)
con1.commit()
con1.close()
But this threw a really weird error. It would say
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 489, in cmd_query
    raw_as_string=raw_as_string)
_mysql_connector.MySQLInterfaceError: Incorrect date value: '1993' for column 'dob' at row 4

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/aliziamojiz/Documents/Alizia/12/prac1.py", line 118, in <module>
    f4()
  File "/Users/aliziamojiz/Documents/Alizia/12/prac1.py", line 113, in f4
    cursor1.execute("Update student set %s=%s where %s=%s"%tup)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/cursor_cext.py", line 266, in execute
    raw_as_string=self._raw_as_string)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 492, in cmd_query
    sqlstate=exc.sqlstate)
mysql.connector.errors.DataError: 1292 (22007): Incorrect date value: '1993' for column 'dob' at row 4
whereas I had not entered 1993 anywhere either in my code or the input I gave.
I am stuck now. How do I take a date as input in a Python program and use it in an update command for a MySQL table?
Please help out.
Thanks in advance :)
The way you're using datetime to obtain a date seems to be odd. strptime will throw a ValueError if the format of the date is not as expected or if the date is invalid. It's usually used like so:
dateInput = input("Enter the value to be set for dob in the format yyyy/mm/dd: ")
try:
    dateObject = datetime.strptime(dateInput, "%Y/%m/%d")
except ValueError:
    raise ValueError("WRONG DATE FORMAT!")
And afterwards, you can use the input date to update your table with dateObject.date().
Additionally, I suggest you keep the values separate from the SQL query (to prevent SQL injection) when using execute, like this:
queryValues = (str(dateObject.date()), idInput)
cursor1.execute("UPDATE student SET dob=%s WHERE id=%s", queryValues)
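Putting the two snippets together, the whole flow might look roughly like the sketch below; it assumes the cursor1/con1 objects from the question, and the prompt strings and the idInput name are only illustrative:

from datetime import datetime

dateInput = input("Enter the value to be set for dob in the format yyyy/mm/dd: ")
try:
    dateObject = datetime.strptime(dateInput, "%Y/%m/%d")
except ValueError:
    raise ValueError("WRONG DATE FORMAT!")

idInput = int(input("Enter the id of the row to update: "))

# Column names cannot be parameterized, so they stay in the query text;
# the values are passed separately so the driver quotes them safely.
queryValues = (str(dateObject.date()), idInput)
cursor1.execute("UPDATE student SET dob=%s WHERE id=%s", queryValues)
con1.commit()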

not enough arguments for format string python mysql

I have a problem with my program. I want to insert data into the database from a txt file. This is my source code:
import MySQLdb
import csv

db=MySQLdb.connect(user='root',passwd='toor',
                   host='127.0.0.1',db='data')
cursor=db.cursor()
csv_data=csv.reader(file('test.txt'))
for row in csv_data:
    sql = "insert into `name` (`id`,`Name`,`PoB`,`DoB`) values(%s,%s,%s,%s);"
    cursor.execute(sql,row)
db.commit()
cursor.close()
After running that program, here is the error:
Traceback (most recent call last):
  File "zzz.py", line 9, in <module>
    cursor.execute(sql,row)
  File "/home/tux/.local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 187, in execute
    query = query % tuple([db.literal(item) for item in args])
TypeError: not enough arguments for format string
and this is my test.txt
4
zzzz
sby
2017-10-10
Please help, and thanks in advance.
Now that you have posted the data file, the error should be obvious to you: each line contains only one field, not the four that the SQL statement requires.
If that is the real format of your data file, it is not CSV data. Instead you need to read each group of four lines as one record; something like this might work:
LINES_PER_RECORD = 4
SQL = 'insert into `name` (`id`,`Name`,`PoB`,`DoB`) values (%s,%s,%s,%s)'

with open('test.txt') as f:
    while True:
        try:
            record = [next(f).strip() for i in range(LINES_PER_RECORD)]
            cursor.execute(SQL, record)
        except StopIteration:
            # insufficient lines available for record, treat as end of file
            break
db.commit()
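An alternative way to group the flat file into four-line records, shown only as a sketch using the same cursor and db objects as above, is to zip one file iterator with itself:

LINES_PER_RECORD = 4
SQL = 'insert into `name` (`id`,`Name`,`PoB`,`DoB`) values (%s,%s,%s,%s)'

with open('test.txt') as f:
    # zip() over four references to the same iterator yields tuples of four
    # consecutive lines, silently dropping any trailing partial record,
    # just like the loop above.
    for record in zip(*[iter(f)] * LINES_PER_RECORD):
        cursor.execute(SQL, [field.strip() for field in record])
db.commit()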

SQLite error in Python

I have a simple piece of code to update a row in sqlite:
def UpdateElement(new_user,new_topic):
    querycurs.execute('''INSERT into First_Data (topic) values(?) WHERE user = (?)''',([new_topic], [new_user]))
However, this gives me the error:
Traceback (most recent call last):
  File "C:/Python27/Database.py", line 40, in <module>
    UpdateElement("Abhishek Mitra","Particle Physics")
  File "C:/Python27/Database.py", line 36, in UpdateElement
    querycurs.execute('''INSERT into First_Data (topic) values(?) WHERE user = (?)''',([new_topic],[new_user]))
OperationalError: near "WHERE": syntax error
You should be using an UPDATE statement instead of INSERT:
def UpdateElement(new_user, new_topic):
    querycurs.execute('''UPDATE First_Data
                         SET topic = ?
                         WHERE user = ?''', (new_topic, new_user))
The problem also stems from the parentheses and from passing new_topic and new_user wrapped in lists, I believe; the parameters should be the plain values, not one-element lists.
You want something like:
cur.execute("UPDATE table SET value=? WHERE name=?", (myvalue, myname))
But yes, UPDATE sounds like what you wanted in the first place.
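As a small follow-up sketch, after the UPDATE it is worth committing on the connection and checking rowcount to see whether any row actually matched. The connection name conn is an assumption; the question only shows the cursor querycurs:

def update_element(conn, querycurs, new_user, new_topic):
    querycurs.execute("UPDATE First_Data SET topic = ? WHERE user = ?",
                      (new_topic, new_user))
    conn.commit()
    # rowcount reports how many rows the WHERE clause matched and updated
    if querycurs.rowcount == 0:
        print("No row found for user:", new_user)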
