I have a department table with an int array column:
create table department(id int,name text, emp_id int[])
I'm getting the error below for this statement when inserting data via Python:
my_cursor.execute(
    """
    insert into department
    select 10, 'QC', array[34,62,Null]
    """
)
The error is
psycopg2.errors.NullValueNotAllowed: array must not contain nulls
But I can insert a NULL value into the array when I run this SQL insert statement in an SQL client or pgAdmin:
insert into department
select 10, 'QC', array[34,62,Null]
I think this error is coming from the Python module, since the database itself accepts NULL array elements in a plain SQL statement.
How can I insert NULL values into the emp_id column through Python?
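One approach worth trying (a sketch, not tested against your schema): pass the array as a bound parameter instead of inlining it into the SQL. psycopg2 adapts a Python list to a PostgreSQL array, and None inside the list to NULL:

```python
# Sketch: bind the array as a parameter; psycopg2 adapts a Python
# list to a PostgreSQL array and None inside it to NULL.
query = "insert into department (id, name, emp_id) values (%s, %s, %s)"
params = (10, "QC", [34, 62, None])

# With a live connection this would be executed as:
# my_cursor.execute(query, params)
```

If the error persists even with bound parameters, check for a constraint or trigger on the table: psycopg2.errors.NullValueNotAllowed maps a server-side error (SQLSTATE 22004), so the rejection may be coming from the database after all.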
I create a table with an autoincrementing primary key:
with open('RAND.xml', "rb") as f, sqlite3.connect("race.db") as connection:
    c = connection.cursor()
    c.execute(
        """CREATE TABLE IF NOT EXISTS race(
               RaceID INTEGER PRIMARY KEY AUTOINCREMENT,
               R_Number INT, R_KEY INT, R_NAME TEXT,
               R_AGE INT, R_DIST TEXT, R_CLASS, M_ID INT)""")
I then want to insert a tuple that has one fewer value than the total number of columns, because the first column is autoincremented.
sql_data = tuple(b)
c.executemany('insert into race values(?,?,?,?,?,?,?)', sql_data)
How do I stop this error?
sqlite3.OperationalError: table race has 8 columns but 7 values were supplied
It's extremely bad practice to assume a specific ordering of the columns: some DBA might come along and modify the table, breaking your SQL statements. Also, an autoincrement value is only assigned if you don't supply a value for the field in your INSERT statement; if you give a value, that value is stored in the new row.
If you amend the code to read
c.executemany('''insert into
    race(R_Number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID)
    values(?,?,?,?,?,?,?)''',
    sql_data)
you should find that everything works as expected.
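A runnable sketch of this approach against an in-memory database (the two example rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS race(
    RaceID INTEGER PRIMARY KEY AUTOINCREMENT,
    R_Number INT, R_KEY INT, R_NAME TEXT,
    R_AGE INT, R_DIST TEXT, R_CLASS, M_ID INT)""")

# Two example rows with seven values each -- RaceID is omitted
# because the named-column INSERT lets SQLite assign it.
b = [(1, 101, 'Alpha', 4, '1200m', 'A', 1),
     (2, 102, 'Beta', 5, '1400m', 'B', 2)]
c.executemany('''insert into
    race(R_Number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID)
    values(?,?,?,?,?,?,?)''', b)

print([row[0] for row in c.execute("select RaceID from race")])  # [1, 2]
```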
From the SQLite documentation:
If the column-name list after table-name is omitted then the number of values inserted into each row must be the same as the number of columns in the table.
RaceID is a column in the table, so it is expected to be present when you're doing an INSERT without explicitly naming the columns. You can get the desired behavior (assign RaceID the next autoincrement value) by passing an SQLite NULL value in that column, which in Python is None:
sql_data = tuple((None,) + a for a in b)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)
The above assumes b is a sequence of sequences of parameters for your executemany statement and attempts to prepend None to each sub-sequence. Modify as necessary for your code.
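A minimal runnable sketch of the None-prepending approach (example data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE race(
    RaceID INTEGER PRIMARY KEY AUTOINCREMENT,
    R_Number INT, R_KEY INT, R_NAME TEXT,
    R_AGE INT, R_DIST TEXT, R_CLASS, M_ID INT)""")

b = [(1, 101, 'Alpha', 4, '1200m', 'A', 1)]
# Prepend None so SQLite assigns RaceID the next autoincrement value.
sql_data = tuple((None,) + tuple(a) for a in b)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)

print(c.execute("select RaceID, R_NAME from race").fetchone())  # (1, 'Alpha')
```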
Suppose I have a table MyTable where the primary key is ID and a composite unique key is ColA and ColB.
I want to retrieve the ID affected by an UPDATE statement like this:
UPDATE MyTable
SET ColC='Blah'
WHERE ColA='xxx' and ColB='yyy'
Is there any way to do this using sqlite3 in python3 in a single statement without doing another SELECT after a successful UPDATE? I'm aware of lastrowid attribute on a cursor, but it seems to only apply to INSERTs.
More generally, I'm curious if any SQL engine allows for such functionality.
You asked if it could be done in some other DBMS, so here's a method in MySQL:
UPDATE MyTable AS m1
JOIN (SELECT @id := id AS id
      FROM MyTable
      WHERE ColA = 'xxx' AND ColB = 'yyy') AS m2
  ON m1.id = m2.id
SET m1.ColC = 'Blah';
After this you can run SELECT @id to get the ID of the updated row. (Note that MySQL user-defined variables use @, not #; # starts a comment in MySQL.)
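For completeness: SQLite 3.35.0 (2021) and later support a RETURNING clause directly, which answers the original question without a second SELECT. A sketch against an in-memory database (schema invented to match the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable(
    ID INTEGER PRIMARY KEY,
    ColA TEXT, ColB TEXT, ColC TEXT,
    UNIQUE (ColA, ColB))""")
conn.execute("INSERT INTO MyTable (ColA, ColB, ColC) VALUES ('xxx', 'yyy', 'old')")

if sqlite3.sqlite_version_info >= (3, 35, 0):
    # One statement: update and fetch the affected ID together.
    ids = [row[0] for row in conn.execute(
        "UPDATE MyTable SET ColC='Blah' "
        "WHERE ColA='xxx' AND ColB='yyy' RETURNING ID")]
else:
    # Older SQLite: fall back to a separate SELECT on the unique key.
    conn.execute("UPDATE MyTable SET ColC='Blah' WHERE ColA='xxx' AND ColB='yyy'")
    ids = [row[0] for row in conn.execute(
        "SELECT ID FROM MyTable WHERE ColA='xxx' AND ColB='yyy'")]

print(ids)  # [1]
```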
I have a list of IDs [1,5,8,...]. I also have a table (TableA) with 100 rows, which matches my list of IDs exactly. I have another table (TableB) with many more rows (2000), and I want to update one column (True/False) based on whether each primary key exists among TableA's primary keys (or in my Python list of IDs, which is the same thing).
Currently I loop through my list of IDs and run a simple update statement for each (Python code below):
for id in ID_List:
    cur.execute('update TableB set "Column1"=%s where "ID"=%s', (False, id))
This works fine, but I'm curious whether there is a single statement I could use instead of a loop. Something like:
cur.execute('update TableB set "Column1"=False where "ID" in ID_List')
or
cur.execute('update TableB set "Column1"=False where "ID" in TableA.keys()')
so that all the rows in ID_List update quickly. I can't use ">" or "<" because the IDs aren't all above or below a specific number; I may want to change IDs (3, 6, 9) but not (4, 7, 8).
You could try something like this:
update TableB set "Column1"=False where TableB."ID" in (select TableA."ID" from TableA)
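In psycopg2 specifically, the Python ID list can also be bound directly as a parameter; a sketch (not executed here, with names taken from the question) uses ANY with a list:

```python
# Sketch: bind the whole Python list; psycopg2 adapts it to a
# PostgreSQL array, so ANY(...) matches any ID in the list.
ID_List = [1, 5, 8]
query = 'update TableB set "Column1" = %s where "ID" = ANY(%s)'
params = (False, ID_List)

# With a live connection:
# cur.execute(query, params)
```

Alternatively, a tuple parameter works with IN, since psycopg2 adapts Python tuples to SQL IN lists: `cur.execute('update TableB set "Column1" = %s where "ID" in %s', (False, tuple(ID_List)))`.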
I have two tables with a common field. I want to find all the items (user_ids) that are present in the first table but not in the second.
Table1(user_id, ...)
Table2(userid, ...)
user_id and userid in the first and second tables are the same field. Here's what I tried:
session.query(Table1.user_id).outerjoin(Table2).filter(Table2.user_id == None)
This is untested as I'm still new to SQLAlchemy, but I think it should push you in the right direction:
table2 = session.query(Table2.user_id).subquery()
result = session.query(Table1).filter(Table1.user_id.notin_(table2))
My guess is this approach would result in the following SQL:
SELECT table1.* FROM table1 WHERE table1.user_id NOT IN (SELECT table2.user_id FROM table2)
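The equivalent SQL can be checked directly with the stdlib sqlite3 module (tables and rows invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1(user_id INT)")
conn.execute("CREATE TABLE table2(userid INT)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO table2 VALUES (?)", [(2,)])

# Anti-join: rows of table1 whose user_id is absent from table2.
missing = [row[0] for row in conn.execute(
    "SELECT t1.user_id FROM table1 t1 "
    "WHERE t1.user_id NOT IN (SELECT t2.userid FROM table2 t2) "
    "ORDER BY t1.user_id")]
print(missing)  # [1, 3]
```

One caveat with NOT IN: if table2.userid can contain NULL, the subquery comparison makes NOT IN match nothing; a NOT EXISTS correlated subquery avoids that edge case.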