I created a table with the following query:
with sqlite3.connect('example.db', detect_types=sqlite3.PARSE_DECLTYPES) as conn:
    cur = conn.cursor()
    cur.execute("CREATE TABLE Surveys(Id INTEGER PRIMARY KEY, Name TEXT, Desc TEXT, DictObject BLOB, Hash TEXT)")
    conn.commit()
Now I have to add some survey data to the Surveys table on every request. The Surveys table has Id as an integer primary key, which has to be incremented on every insertion. What is the proper way to do this? Do I have to fetch every row and check what the last Id is on every request?
sqlite will automatically provide an id for an INTEGER PRIMARY KEY column on INSERT if you do not provide a value yourself. Just insert data for every column except the id column.
with sqlite3.connect('example.db', detect_types=sqlite3.PARSE_DECLTYPES) as conn:
    cur = conn.cursor()
    cur.execute("INSERT INTO Surveys(Name, Desc, DictObject, Hash) VALUES (?, ?, ?, ?)",
                ('somename', 'some description\nof sorts\n',
                 "{'dump': 'of a dictionary'}", '0xhash'))
You may want to add the keyword AUTOINCREMENT to your id column, though; the default is to pick the highest existing row id plus 1, and if you delete rows from the table on a regular basis that can lead to reuse of ids (delete the row with the current highest id and that id will be re-used on the next insert). AUTOINCREMENT guarantees that each generated number is used only once per table, independent of deletes.
Note that when you use a sqlite3 connection as a context manager (using the with statement), the commit() is automatically applied for you. Quoting the documentation:
Connection objects can be used as context managers that automatically commit or rollback transactions. In the event of an exception, the transaction is rolled back; otherwise, the transaction is committed.
In other words, you can safely remove the conn.commit() line from your code; it is entirely redundant.
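To answer the "how do I get the id?" part directly: after the INSERT, the generated id is available on the cursor as lastrowid, so no extra lookup query is needed. A minimal sketch using an in-memory database in place of example.db:

```python
import sqlite3

# In-memory database stands in for 'example.db' from the question
with sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES) as conn:
    cur = conn.cursor()
    cur.execute("CREATE TABLE Surveys(Id INTEGER PRIMARY KEY, Name TEXT, Desc TEXT, DictObject BLOB, Hash TEXT)")
    cur.execute("INSERT INTO Surveys(Name, Desc, DictObject, Hash) VALUES (?, ?, ?, ?)",
                ('somename', 'some description', "{'dump': 'of a dictionary'}", '0xhash'))
    # lastrowid holds the id sqlite generated for the most recent INSERT
    new_id = cur.lastrowid
```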
Related
I am trying to build a GUI CRUD app in Python where a user enters an object (say, an apple), the amount of objects (10), and the date on which they conducted their research, which could be today or yesterday etc., in this format: 29/03/2021.
This data then gets sent to a sqlite3 database so reports can be run.
When I run the Python file, the sqlite database contains all the information added except the ID, which should be autoincremented; instead it shows NULL.
import sqlite3

class Database:
    def __init__(self, db):
        self.conn = sqlite3.connect(db)
        self.cur = self.conn.cursor()
        self.cur.execute("""CREATE TABLE IF NOT EXISTS errors (ID INTEGER PRIMARY KEY AUTOINCREMENT,
            Subject text, Total integer, Date text)""")
        self.conn.commit()

    def insert(self, subject, total, date):
        self.cur.execute("INSERT INTO errors VALUES (NULL, ?, ?, ?)", (subject, total, date))
        self.conn.commit()
So basically my ID column is not incrementing and shows NULL. I have also tried removing AUTOINCREMENT, as some say it is not necessary when PRIMARY KEY is present, but it still doesn't work.
Well yes, autoincrement ids only work when sqlite is creating the value. Here you're giving it NULL explicitly, so it does as you ask and uses NULL.
If you want the default behaviour, just don't provide a value for that column:
insert into errors (subject, total, date) values (?, ?, ?)
Unlike Postgres, sqlite apparently doesn't support DEFAULT pseudo-expressions, so that seems not to be an option.
Incidentally, the AUTOINCREMENT is probably why sqlite doesn't error on NULL:
According to the SQL standard, PRIMARY KEY should always imply NOT NULL. Unfortunately, due to a bug in some early versions, this is not the case in SQLite. Unless the column is an INTEGER PRIMARY KEY or the table is a WITHOUT ROWID table or the column is declared NOT NULL, SQLite allows NULL values in a PRIMARY KEY column.
meaning a column which is strictly declared as INTEGER PRIMARY KEY should implicitly reject NULL values, as it will make the column an alias / replacement for the implicit ROWID.
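A minimal sketch of the fix (listing the columns explicitly and omitting ID entirely), using an in-memory database with the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS errors (ID INTEGER PRIMARY KEY AUTOINCREMENT,
    Subject text, Total integer, Date text)""")

# Omit the ID column entirely; sqlite assigns the next id itself
cur.execute("INSERT INTO errors (Subject, Total, Date) VALUES (?, ?, ?)",
            ("apple", 10, "29/03/2021"))
conn.commit()

row = cur.execute("SELECT ID, Subject FROM errors").fetchone()
```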
There are 6 columns and for some reason when my program gets to this bit of code during install, it simply creates a blank file with no table.
Through trial and error, I found the only thing that stopped it creating a blank file was removing the limit column.
I have other code that runs and looks the same just for different databases and it works fine.
try:
    # Connect to Database
    conn = sqlite3.connect('databases/Categories.db')
    cur = conn.cursor()
    # Create Table
    cur.execute("""CREATE TABLE categories (
        priority text,
        name text,
        type text,
        increment text,
        total real,
        limit real)""")
    # Commit and Close
    conn.commit()
    conn.close()
except sqlite3.OperationalError:
    pass
"limit" is an SQL keyword, for example, as in
SELECT foo
FROM bar
LIMIT 10;
If you want to use "limit" as a column name in sqlite, it needs to be quoted, in one of these ways:
'limit'
"limit"
[limit]
`limit`
So for example, your statement could be
cur.execute("""CREATE TABLE categories (
    priority text,
    name text,
    type text,
    increment text,
    total real,
    "limit" real)""")
Note that it must be quoted in other statements too, for example
"""INSERT INTO categories ("limit") VALUES (?);"""
I did some more testing and I fixed it by renaming the limit column to something else. Turns out sqlite3 doesn't like columns named limit.
Abstract question: When importing a record (= table row) from some programs to a database, I sometimes want to only import the record if and only if the record isn't already present in the database, based on some unique identifier. Is it preferable to:
first query the database to check whether the record with the same identifier is already present in the database, and if not, insert the record,
or add a uniqueness constraint in the database, then try to insert the record inside a try ... except block
?
Concrete example: I have a data file containing a list of users. For each user I have a first name and last name. I assume that two people may not have the same name. I want to write a Python script that inserts the data in a PostgreSQL table, whose columns are userid (auto-incremented primary key), first_name (text), and last_name (text).
I want to insert the user John Doe. Is it preferable to:
first query the database to check whether a record with the same identifier is already present, and if not, insert the record, e.g.:
sql = "SELECT COUNT(*) cnt FROM users WHERE first_name = 'John' AND last_name = 'Doe'"
data = pd.read_sql_query(sql, connection)
if data['cnt'][0] == 0:
    sql = "INSERT INTO users (first_name, last_name) VALUES ('John', 'Doe')"
    cursor.execute(sql)
    connection.commit()
or add a uniqueness constraint in the database (on (first_name, last_name)), then try to insert the record inside a try ... except block, e.g.:
try:
    sql = "INSERT INTO users (first_name, last_name) VALUES ('John', 'Doe')"
    cursor.execute(sql)
    connection.commit()
except:
    print('The user is already present in the database')
?
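The question targets PostgreSQL, but the constraint-plus-exception pattern can be sketched self-contained with sqlite3; the shape is the same, except that with psycopg2 you would catch psycopg2.errors.UniqueViolation instead of sqlite3.IntegrityError. Note that it catches only the constraint violation, not every exception:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE users (
    userid INTEGER PRIMARY KEY,
    first_name TEXT,
    last_name TEXT,
    UNIQUE (first_name, last_name))""")

def insert_user(first, last):
    """Insert a user; return True if inserted, False if already present."""
    try:
        cur.execute("INSERT INTO users (first_name, last_name) VALUES (?, ?)",
                    (first, last))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        # The UNIQUE constraint was violated: the user already exists
        return False

first_try = insert_user('John', 'Doe')
second_try = insert_user('John', 'Doe')
```

A catch-all `except:` as in the question's sketch would also swallow unrelated errors (lost connection, typo in the SQL), which is why catching the specific integrity error is preferable.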
Creating my table:
cursor.execute("""
    CREATE TABLE if not exists intraday_quote (
        id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
        symbol VARCHAR(10) NOT NULL,
        date DATE,
        time DATE,
        open FLOAT,
        high FLOAT,
        low FLOAT,
        close FLOAT,
        volume INTEGER);
""")
and I'm trying to insert this:
conn = sqlite3.connect('intraday_quote.db')
cursor = conn.cursor()
# Prepare SQL query to INSERT a record into the database.
sql = """INSERT INTO intraday_quote(symbol) VALUES ('Mac123432')"""
cursor.execute(sql)
No insertion happened in the database. What am I missing?
You need to commit your changes so they take effect in the database. Commit after write operations like UPDATE and INSERT. In sqlite3 the commit() method lives on the connection object, not the cursor:
conn.commit()
after your execute succeeds. If the execute raises an exception, call conn.rollback() instead (exercise for you :) ) so you won't end up with wrong data in the database.
You need to do conn.commit() to see the changes in the database. Quoting the documentation
This method commits the current transaction. If you don’t call this method, anything you did since the last call to commit() is not visible from other database connections. If you wonder why you don’t see the data you’ve written to the database, please check you didn’t forget to call this method.
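Putting it together, a trimmed sketch of the quote table (columns abbreviated, in-memory database) with the commit in place:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("""CREATE TABLE IF NOT EXISTS intraday_quote (
    id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    symbol VARCHAR(10) NOT NULL)""")

cursor.execute("INSERT INTO intraday_quote(symbol) VALUES ('Mac123432')")
conn.commit()  # commit() is on the connection, not the cursor

count = cursor.execute("SELECT COUNT(*) FROM intraday_quote").fetchone()[0]
```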
So I'm doing a lot of inserts that I only want to go into a certain table when the name doesn't "exist" in the table yet, i.e. I don't want any duplicates. I'm approaching it this way now:
def create_artist(artist_name):
    artistid = has_artist(artist_name)
    if not artistid:
        sql['cursor'].execute("INSERT INTO artists VALUES (NULL, ?)", (artist_name,))
        artistid = has_artist(artist_name)
    return artistid[0]

def has_artist(artist_name):
    sql['cursor'].execute("SELECT id FROM artists WHERE artist_name = ?", (artist_name,))
    return sql['cursor'].fetchone()
It basically looks up whether there is an artist with the same name in the table; if not, it inserts one, otherwise it just returns the result of the lookup. There has to be a better way of doing this. Is it possible to move this whole process into a single query, so I can do it all in SQL?
Look into INSERT OR IGNORE (sqlite's spelling of MySQL's INSERT IGNORE). This requires a UNIQUE index on your table; the constraint violation is what triggers the IGNORE.
INSERT OR IGNORE INTO artists VALUES (NULL, ?)
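A self-contained sketch of this pattern with sqlite3, assuming an artists table with a UNIQUE constraint on artist_name (the question's exact table layout is a guess):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE artists (id INTEGER PRIMARY KEY, artist_name TEXT UNIQUE)")

def create_artist(name):
    """Insert the artist if absent, then return its id either way."""
    # The UNIQUE constraint makes OR IGNORE skip duplicate names silently
    cur.execute("INSERT OR IGNORE INTO artists (artist_name) VALUES (?)", (name,))
    cur.execute("SELECT id FROM artists WHERE artist_name = ?", (name,))
    return cur.fetchone()[0]

a = create_artist("Radiohead")
b = create_artist("Radiohead")  # duplicate insert is ignored; same id comes back
```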