SQLite query using Python IS NOT NULL has no effect - python

I have a query which is supposed to let me input two keys and find rows that match both (there is only one such row for each pair in the database).
params = (searchword1, searchword2)
c.execute("SELECT primarykey, key FROM table WHERE key=? AND ? IS NOT NULL", params)
rows = c.fetchall()
print(rows)
The print statement here gives me every case where only the first condition is true (key=?/key=searchword1).
What am I doing wrong?
Additionally - what I want to do here is simply to verify whether the data entry exists or not based on the two parameters. Is there a simpler way to do this?

You cannot bind column names (or any identifier) with a prepared statement; placeholders only ever bind values, and allowing identifiers to be bound would defeat the security purpose of parameterized queries. So ? IS NOT NULL does not test a column: it tests the bound value searchword2, which is presumably never NULL, so the condition is always true and filters nothing. In your case you don't need it anyway, because the expression key = ? can never match a NULL value:
params = (searchword1,)
c.execute("SELECT primarykey, key FROM table WHERE key = ?", params)
rows = c.fetchall()
print(rows)
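On the follow-up question, an existence check doesn't need fetchall at all; SELECT EXISTS returns 0 or 1 directly. A sketch against a throwaway in-memory table (the second column name key2 is hypothetical, since the real schema isn't shown; substitute whichever column searchword2 is supposed to match):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE t (primarykey INTEGER PRIMARY KEY, key TEXT, key2 TEXT)")
c.execute("INSERT INTO t (key, key2) VALUES ('foo', 'bar')")

def entry_exists(searchword1, searchword2):
    # EXISTS yields exactly one row containing 0 or 1
    row = c.execute("SELECT EXISTS(SELECT 1 FROM t WHERE key=? AND key2=?)",
                    (searchword1, searchword2)).fetchone()
    return bool(row[0])

print(entry_exists('foo', 'bar'))  # True
print(entry_exists('foo', 'baz'))  # False
```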


How do I pass variables in SQL3 python? [duplicate]

I create a table with primary key and autoincrement.
with open('RAND.xml', "rb") as f, sqlite3.connect("race.db") as connection:
c = connection.cursor()
c.execute(
    """CREATE TABLE IF NOT EXISTS race(
           RaceID INTEGER PRIMARY KEY AUTOINCREMENT,
           R_Number INT, R_KEY INT, R_NAME TEXT, R_AGE INT,
           R_DIST TEXT, R_CLASS, M_ID INT)""")
I then want to insert a tuple, which of course has one fewer value than the table has columns, because the first column is autoincrement.
sql_data = tuple(b)
c.executemany('insert into race values(?,?,?,?,?,?,?)', b)
How do I stop this error?
sqlite3.OperationalError: table race has 8 columns but 7 values were supplied
First, it's bad practice to assume a specific ordering of the columns: someone might come along and modify the table, breaking your SQL statements. Second, an autoincrement value is only used if you don't specify a value for that column in your INSERT statement; if you do supply a value, that value is stored in the new row.
If you amend the code to read
c.executemany('''insert into
                     race(R_Number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID)
                 values(?,?,?,?,?,?,?)''',
              sql_data)
you should find that everything works as expected.
From the SQLite documentation:
If the column-name list after table-name is omitted then the number of values inserted into each row must be the same as the number of columns in the table.
RaceID is a column in the table, so it is expected to be present when you're doing an INSERT without explicitly naming the columns. You can get the desired behavior (assign RaceID the next autoincrement value) by passing an SQLite NULL value in that column, which in Python is None:
sql_data = tuple((None,) + tuple(a) for a in b)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)
The above assumes b is a sequence of sequences of parameters for your executemany statement and attempts to prepend None to each sub-sequence. Modify as necessary for your code.
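Both fixes can be checked against a throwaway in-memory database; a minimal sketch (table layout taken from the question, the sample rows in b are made up):

```python
import sqlite3

connection = sqlite3.connect(":memory:")
c = connection.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS race(
                 RaceID INTEGER PRIMARY KEY AUTOINCREMENT,
                 R_Number INT, R_KEY INT, R_NAME TEXT, R_AGE INT,
                 R_DIST TEXT, R_CLASS, M_ID INT)""")

b = [(1, 10, 'Alpha', 4, '1200m', 'A', 7),
     (2, 11, 'Beta', 5, '1400m', 'B', 7)]

# Option 1: name the seven non-autoincrement columns explicitly
c.executemany('''insert into
                     race(R_Number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID)
                 values(?,?,?,?,?,?,?)''', b)

# Option 2: supply None for RaceID so SQLite assigns the next id
sql_data = tuple((None,) + tuple(a) for a in b)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)

# RaceID was filled in automatically in both cases
print(c.execute("select RaceID, R_NAME from race").fetchall())
```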

retrieving a single value from sql table

I wish to retrieve a single value from this database I have created. For example, the user will select a Name from a drop-down box (these names correspond to the Name column in the database). The name chosen will be stored in a variable called name_value. I would like to know how to search the database for the name in name_value and return ONLY the text in the next column, called Scientific, into another variable called new_name. I hope I explained that well?
connection = sqlite3.connect("Cw.db")
crsr = connection.cursor()
crsr.execute("""CREATE TABLE Names(
Name text,
Scientific text)""")
Inserting these values (there are more, but it's unnecessary to add them all):
connection = sqlite3.connect("Cw.db")
crsr = connection.cursor()
crsr.execute("""INSERT INTO Names (Name, Scientific)
VALUES
('Human', 'Homo Sapien');""")
The SELECT statement in SQL can be used to query for rows with specific values, and to specify the columns to be returned.
In your case, the code would look something like this
stmt = """\
SELECT Scientific
FROM Names
WHERE Name = ?
LIMIT 1
"""
name = 'Human'
crsr.execute(stmt, (name,))
new_name = crsr.fetchone()[0]
A few points to note:
- we use a ? in the SELECT statement as a placeholder for the value that we are querying for
- we set LIMIT 1 in the SELECT statement to ensure that at most one row is returned, since you want to assign the result to a single variable
- the value(s) passed to crsr.execute must be a tuple, even if there is only one value
- the return value of crsr.fetchone is a tuple, even though we are only fetching one column
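One caveat worth adding: crsr.fetchone() returns None when no row matches, so indexing [0] directly raises a TypeError for an unknown name. A small self-contained sketch with a guard (in-memory database, one sample row from the question):

```python
import sqlite3

connection = sqlite3.connect(":memory:")
crsr = connection.cursor()
crsr.execute("CREATE TABLE Names(Name text, Scientific text)")
crsr.execute("INSERT INTO Names (Name, Scientific) VALUES ('Human', 'Homo Sapien')")

def scientific_name(name):
    row = crsr.execute(
        "SELECT Scientific FROM Names WHERE Name = ? LIMIT 1", (name,)
    ).fetchone()
    # guard: fetchone() is None when no row matched
    return row[0] if row is not None else None

print(scientific_name('Human'))   # Homo Sapien
print(scientific_name('Dragon'))  # None
```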

Efficiently delete multiple records

I am trying to execute a delete statement that checks if the table has any SKU that exists in the SKU column of the dataframe. And if it does, it deletes the row. As I am using a for statement to iterate through the rows and check, it takes a long time to run the program for 6000 rows of data.
I used executemany() as it was faster than using a for loop for the delete statement, but I am finding it hard to find an alternative for checking values in the dataframe.
sname = input("Enter name: ")
cursor = mydb.cursor(prepared=True)
column = df["SKU"]
data=list([(sname, x) for x in column])
query="""DELETE FROM price_calculations1 WHERE Name=%s AND SKU=%s"""
cursor.executemany(query,data)
mydb.commit()
cursor.close()
Is there a more efficient code for achieving the same?
You could first run a SELECT id FROM price_calculations1 WHERE Name=%s AND SKU=%s to collect the matching ids, and then use a MySQL WHILE loop to delete those ids server-side, without a per-row round trip from the client, which tends to be more performant.
See: https://www.mssqltips.com/sqlservertip/6148/sql-server-loop-through-table-rows-without-cursor/
A WHILE loop without the preceding SELECT might also work.
See: https://dev.mysql.com/doc/refman/8.0/en/while.html
Rather than looping, try to do all the work in a single call to the database (this guideline is often applicable when working with databases).
Given a list of name / sku pairs:
pairs = [(name1, sku1), (name2, sku2), ...]
create a single query that matches every pair and deletes the rows in one statement. (Note that MySQL rejects a DELETE whose subquery selects from the same table, so it is simplest to put the criteria directly in the WHERE clause; also, commit() is a method of the connection, not the cursor.)
# Build the WHERE clause criteria
criteria = " OR ".join(["(Name = %s AND SKU = %s)"] * len(pairs))
# Create the query
query = "DELETE FROM price_calculations1 WHERE {}".format(criteria)
# "Flatten" the value pairs
values = [i for j in pairs for i in j]
cursor.execute(query, values)
mydb.commit()
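With thousands of pairs a single statement can grow very large, so it is common to send the deletes in batches. A sketch of a query builder plus chunking (pure Python; price_calculations1, Name, and SKU come from the question, the batch size is arbitrary, and the execute call against the live connection is left commented out):

```python
def build_delete(pairs):
    """Build one DELETE statement covering every (name, sku) pair."""
    criteria = " OR ".join(["(Name = %s AND SKU = %s)"] * len(pairs))
    query = "DELETE FROM price_calculations1 WHERE " + criteria
    values = [v for pair in pairs for v in pair]  # flatten the pairs
    return query, values

def chunked(seq, size):
    """Yield successive slices of seq of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

pairs = [("widget", 100 + i) for i in range(5)]
for batch in chunked(pairs, 2):
    query, values = build_delete(batch)
    # cursor.execute(query, values)  # run against the real connection
    print(query, values)
# mydb.commit()  # commit once after all batches
```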

cannot insert None value in postgres using psycopg2

I have a database (PostgreSQL) with more than 100 columns and rows. Some cells in the table are empty; I am using Python for scripting, so None is placed in the empty cells, but it shows the following error when I try to insert into the table:
psycopg2.ProgrammingError: column "none" does not exist
I am using psycopg2 as the Python-Postgres interface. Any suggestions?
Thanks in advance.
Here is my code:
list1 = [None if str(x) == 'nan' else x for x in list1]
cursor.execute("""INSERT INTO table VALUES %s""" % list1)
Do not use % string interpolation, use SQL parameters instead. The database adapter can handle None just fine, it just needs translating to NULL, but only when you use SQL parameters will that happen:
list1 = [(None,) if str(x) == 'nan' else (x,) for x in list1]
cursor.executemany("""INSERT INTO table VALUES (%s)""", list1)
I am assuming that you are trying to insert multiple rows here. For that, you should use the cursor.executemany() method and pass in a list of rows to insert; each row is a tuple with one column here.
If list1 is just one value, then use:
param = list1[0]
if str(param) == 'nan':
param = None
cursor.execute("""INSERT INTO table VALUES (%s)""", (param,))
which is a little more explicit and readable.
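When each row has many columns (as in the question), executemany can be slow; psycopg2.extras.execute_values sends all rows in one statement and also handles None as NULL. A sketch of the NaN-to-None cleanup plus that call (my_table and the sample rows are made up; the database call is shown but not run here):

```python
import math

def clean(row):
    """Replace float NaN cells with None so they become SQL NULL."""
    return tuple(None if isinstance(x, float) and math.isnan(x) else x
                 for x in row)

rows = [(1.0, float('nan'), 'a'), (float('nan'), 2.0, 'b')]
cleaned = [clean(r) for r in rows]
print(cleaned)  # NaN cells are now None

# Against a live connection (not executed here):
# from psycopg2.extras import execute_values
# execute_values(cursor, "INSERT INTO my_table VALUES %s", cleaned)
```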

Saving Tuples as blob data types in Sqlite3 in Python

I have a dictionary in Python. The keys are tuples of varying size containing unicode characters, and the values are single int numbers. I want to insert this dictionary into an sqlite db with a two-column table.
The first column is for the key values and the second column should hold the corresponding int value. Why do I want to do this? Well, I have a very large dictionary, and I used cPickle, even setting the protocol to 2; the size is still big, and saving and loading the file takes a lot of time. So I decided to save it in a db instead. This dictionary is only loaded into memory once, at the start of the program, so there is no repeated overhead.
Now the problem is that I want to save the tuples exactly as tuples (not strings), so whenever I load my table into memory, I can immediately build my dictionary with no problem.
Does anyone know how I can do this?
A couple of things. First, SQLite doesn't let you store Python data structures directly. Second, I'm guessing you want the ability to query the value by the tuple key on demand, so you don't want to pickle and unpickle the whole dict and then search its keys.
The problem is, you can't query with a tuple, and you can't break the tuple entries into their own columns because they are of varying sizes. If you must use SQLite, you pretty much have to concatenate the unicode characters in the tuple, with a delimiter that is not one of the characters appearing in the tuple values. Use that as a key, and store it in a primary key column in SQLite.
def tuple2key(t, delimiter=u':'):
    return delimiter.join(t)

import sqlite3
conn = sqlite3.connect('/path/to/your/db')
cur = conn.cursor()
cur.execute('''create table tab (k text primary key, value integer)''')

# store the dict into the table (commit on the connection, not the cursor)
for k, v in my_dict.iteritems():
    cur.execute('''insert into tab values (?, ?)''', (tuple2key(k), v))
conn.commit()

# query a value (note the column is named k, and parameters must be a tuple)
v = cur.execute('''select value from tab where k = ?''',
                (tuple2key((u'a', u'b')),)).fetchone()
It is possible to store tuples in an sqlite db and to create indices on them; it just needs some extra code.
Whether storing tuples in the db is an appropriate solution in this particular case is another issue (probably a two-key solution is better suited).
import sqlite3
import pickle

def adapt_tuple(t):
    return pickle.dumps(t)

sqlite3.register_adapter(tuple, adapt_tuple)  # cannot use pickle.dumps directly because of inadequate argument signature
sqlite3.register_converter("tuple", pickle.loads)

def collate_tuple(string1, string2):
    return cmp(pickle.loads(string1), pickle.loads(string2))

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.create_collation("cmptuple", collate_tuple)
cur = con.cursor()
cur.execute("create table test(p tuple unique collate cmptuple)")
cur.execute("create index tuple_collated_index on test(p collate cmptuple)")

# insert
p = (1, 2, 3)
p1 = (1, 2)
cur.execute("insert into test(p) values (?)", (p,))
cur.execute("insert into test(p) values (?)", (p1,))

# ordered select
cur.execute("select p from test order by p collate cmptuple")
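The snippet above is Python 2 (cmp was removed in Python 3). A minimal Python 3 version of the adapter/converter idea, without the custom collation, might look like this (assumption: round-tripping plain tuples is all that is needed):

```python
import pickle
import sqlite3

# Store tuples pickled as BLOBs, and unpickle them on the way out
sqlite3.register_adapter(tuple, pickle.dumps)
sqlite3.register_converter("tuple", pickle.loads)

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test(p tuple, value integer)")

my_dict = {(u'a', u'b'): 1, (u'c',): 2}
cur.executemany("insert into test values (?, ?)", my_dict.items())

# the converter rebuilds the tuple keys, so the dict round-trips intact
restored = dict(cur.execute("select p, value from test"))
print(restored == my_dict)  # True
```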
I think it is better to create 3 columns in your table - key1, key2 and value.
If you prefer to save the key as a tuple, you can still use pickle, but apply it to the key only. Then you can save it as a blob.
>>> pickle.dumps((u"\u20AC",u"\u20AC"))
'(V\\u20ac\np0\ng0\ntp1\n.'
>>> pickle.loads(_)
(u'\u20ac', u'\u20ac')
>>>