Create a new SQLite table in Python with a for-loop

Say I have 100 different integers that I want to store as a row with 100 columns.
I am trying it like this:
db = sqlite3.connect("test.db")
c = db.cursor()
c.execute('''
    CREATE TABLE IF NOT EXISTS nums(
    id INTEGER PRIMARY KEY,
    ''')
for i in range(100):
    c.execute('''
        ALTER TABLE nums
        ADD ''' + 'column_' + i + '''INTEGER''')
db.commit()
Someone told me that when you are using numbers as column names, you could probably do it a better way. But if I, for example, have a list of strings in Python and I want to loop through them and store every individual string in its own column, the approach would be the same, right?
However, this code runs without errors for me, yet no new table is created. How come?

Your ALTER statement is incorrect: it's missing the COLUMN keyword after ADD, and 'column_' + i fails anyway because i is an integer, not a string. You can use the following:
for i in range(100):
    c.execute(f'ALTER TABLE nums ADD COLUMN column_{i} INTEGER')
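Putting the answer together with the question's setup, a minimal sketch of the whole flow might look like this (it assumes a fresh database; ALTER TABLE will complain about duplicate columns if the table already has them):

import sqlite3

db = sqlite3.connect("test.db")
c = db.cursor()

# Create the table with only the primary-key column.
c.execute("CREATE TABLE IF NOT EXISTS nums(id INTEGER PRIMARY KEY)")

# Add the 100 integer columns one by one.
for i in range(100):
    c.execute(f"ALTER TABLE nums ADD COLUMN column_{i} INTEGER")

db.commit()
db.close()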

Related

How do I pass variables in SQL3 python? [duplicate]

I create a table with a primary key and autoincrement:
with open('RAND.xml', "rb") as f, sqlite3.connect("race.db") as connection:
    c = connection.cursor()
    c.execute(
        """CREATE TABLE IF NOT EXISTS race(RaceID INTEGER PRIMARY KEY AUTOINCREMENT,
           R_Number INT, R_KEY INT, R_NAME TEXT, R_AGE INT, R_DIST TEXT, R_CLASS, M_ID INT)""")
I then want to insert a tuple, which of course has one value fewer than the total number of columns, because the first column is the autoincrement:
sql_data = tuple(b)
c.executemany('insert into race values(?,?,?,?,?,?,?)', b)
How do I stop this error?
sqlite3.OperationalError: table race has 8 columns but 7 values were supplied
It's extremely bad practice to assume a specific ordering of the columns. Some DBA might come along and modify the table, breaking your SQL statements. Secondly, an autoincrement value will only be used if you don't specify a value for the field in your INSERT statement; if you give a value, that value will be stored in the new row.
If you amend the code to read
c.executemany('''insert into
                 race(R_number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID)
                 values(?,?,?,?,?,?,?)''',
              sql_data)
you should find that everything works as expected.
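For illustration, sql_data would then be a sequence of 7-item rows matching the listed columns, e.g. (the sample values below are made up, just to show the expected shape):

sql_data = [
    (1, 101, 'Dancer', 4, '1200m', 'C1', 7),   # R_Number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID
    (2, 102, 'Runner', 5, '1400m', 'C2', 7),
]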
From the SQLite documentation:
If the column-name list after table-name is omitted then the number of values inserted into each row must be the same as the number of columns in the table.
RaceID is a column in the table, so it is expected to be present when you're doing an INSERT without explicitly naming the columns. You can get the desired behavior (assign RaceID the next autoincrement value) by passing an SQLite NULL value in that column, which in Python is None:
sql_data = tuple((None,) + a for a in b)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)
The above assumes b is a sequence of sequences of parameters for your executemany statement and attempts to prepend None to each sub-sequence. Modify as necessary for your code.
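For example, with a single made-up 7-item row in b:

b = [(1, 101, 'Dancer', 4, '1200m', 'C1', 7)]        # made-up sample row
sql_data = tuple((None,) + a for a in b)
# sql_data == ((None, 1, 101, 'Dancer', 4, '1200m', 'C1', 7),)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)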

How to solve "Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied" on DELETE and executemany? [duplicate]

Say I have a list of the following values:
listA = [1,2,3,4,5,6,7,8,9,10]
I want to put each value of this list in a column named formatteddate in my SQLite database using the executemany command, rather than looping through the entire list and inserting each value separately.
I know how I would do it if I had multiple columns of data to insert. For instance, if I had to insert listA, listB, listC then I could create a tuple like (listA[i], listB[i], listC[i]). Is it possible to insert one list of values without a loop? Also assume the inserted values are integers.
UPDATE:
Based on the answer provided I tried the following code:
def excutemanySQLCodewithTask(sqlcommand, task, databasefilename):
    # create a database connection
    conn = create_connection(databasefilename)
    with conn:
        cur = conn.cursor()
        cur.executemany(sqlcommand, [(i,) for i in task])
    return cur.lastrowid
tempStorage = [19750328, 19750330, 19750401, 19750402, 19750404, 19750406, 19751024, 19751025, 19751028, 19751030]
excutemanySQLCodewithTask("""UPDATE myTable SET formatteddate = (?) ;""",tempStorage,databasefilename)
It still takes too long (roughly 10 hours). I have 150,000 items in tempStorage. I tried INSERT INTO and that was slow as well. It seems like it isn't possible to make a list of tuples of integers.
As you say, you need a list of tuples. So you can do:
cursor.executemany("INSERT INTO my_table VALUES (?)", [(a,) for a in listA])
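A self-contained sketch of that pattern, reusing the my_table and formatteddate names from this thread (the connection setup is an assumption for illustration):

import sqlite3

listA = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

conn = sqlite3.connect("test.db")
conn.execute("CREATE TABLE IF NOT EXISTS my_table(formatteddate INTEGER)")

# executemany expects a sequence of parameter tuples, one tuple per row.
conn.executemany("INSERT INTO my_table VALUES (?)", [(a,) for a in listA])
conn.commit()
conn.close()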

IntegrityError: NOT NULL constraint failed PYTHON, SQLite3

I'm trying to store my JSON file in the popNames DB, but this error pops up.
My JSON file is a dictionary with the country as the key and the person names as the value. In my DB table I want the country in the first column, as the primary key, and the names in the subsequent columns.
Could anyone help me with this?
(The asker's code was posted as an image.)
Every INSERT call creates a new row in the PopNamesDB table. Your code creates many such rows: the first row has a country but NULL for all the other columns. The next N rows each have a null country, a value for colName, and NULL for all the other columns.
An easy way to fix your code is to change your follow-up INSERT calls (on line 109) into UPDATE statements that modify the row you created earlier, instead of creating new rows. The query will look something like
cur.execute(''' UPDATE PopNamesDB SET ''' + colName + ''' = ? WHERE country = ?''', (y, c))
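A rough sketch of that insert-once-then-update pattern follows. The table layout, column names, and sample data are assumptions made for illustration, since the asker's actual code was only posted as an image:

import sqlite3

data = {"Sweden": ["Alice", "Bob"], "Spain": ["Ana", "Luis"]}   # stands in for the parsed JSON

conn = sqlite3.connect("popnames.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS PopNamesDB("
            "country TEXT PRIMARY KEY, name_0 TEXT, name_1 TEXT)")

for c, names in data.items():
    # One row per country...
    cur.execute("INSERT OR IGNORE INTO PopNamesDB(country) VALUES (?)", (c,))
    for i, y in enumerate(names):
        colName = f"name_{i}"
        # ...then fill each name column of that same row instead of inserting again.
        cur.execute("UPDATE PopNamesDB SET " + colName + " = ? WHERE country = ?", (y, c))

conn.commit()
conn.close()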

SQL insert query through python loop

I'm trying to insert multiple rows into a table using a for-loop in python using the following code:
ID = 0
values = ['a', 'b', 'c']
for x in values:
    database.execute("INSERT INTO table (ID, value) VALUES (:ID, :value)",
                     ID=ID, value=x)
    ID += 1
What I'd expected to happen was that this piece of code would insert three rows into my table. The only problem is that it only executes the query once, so I'd only get the row (0, 'a').
There aren't any error messages popping up; it just doesn't update the table with the other two values. Weirdly enough, however, I can circumvent this problem by using multiple queries, like so:
ID = 0
values = ['a', 'b', 'c']
for x in values:
    database.execute("INSERT INTO table (ID) VALUES (:ID)", ID=ID)
    database.execute("INSERT INTO table (value) VALUES (:value)", value=x)
    ID += 1
While this does update my table, this method becomes more tedious as I add columns to my table further down the line. Does anyone know why the first snippet of code doesn't work but the second one does?
The execute method takes the query parameters as a single second argument, not as keyword arguments: a sequence for qmark-style placeholders, or a mapping for named placeholders.
execute(sql[, parameters])
Executes an SQL statement. The SQL statement may be parameterized (i. e. placeholders instead of SQL literals). The sqlite3 module
supports two kinds of placeholders: question marks (qmark style) and
named placeholders (named style).
This should work:
database.execute("INSERT INTO table (ID, value) VALUES (:ID, :value)", {"ID": ID, "value": x})
You might want to investigate executemany while you're in the doc.
From the same doc:
commit()
This method commits the current transaction. If you don’t call this method, anything you did since the last call to commit() is not
visible from other database connections. If you wonder why you don’t
see the data you’ve written to the database, please check you didn’t
forget to call this method.
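Putting both hints together, a minimal sqlite3 sketch might look like this (my_table is a placeholder name, since the question's literal name table is an SQL keyword):

import sqlite3

conn = sqlite3.connect("test.db")
conn.execute("CREATE TABLE IF NOT EXISTS my_table(ID INTEGER, value TEXT)")

values = ['a', 'b', 'c']
rows = list(enumerate(values))          # [(0, 'a'), (1, 'b'), (2, 'c')]

conn.executemany("INSERT INTO my_table (ID, value) VALUES (?, ?)", rows)
conn.commit()   # without commit(), other connections won't see the new rows
conn.close()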

python sql statement optimization in python

I found only one way to update only NULL values in a MySQL DB with Python.
I have this kind of statement:
sql = "INSERT INTO `table` VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)\
ON DUPLICATE KEY UPDATE Data_block_1_HC1_sec_voltage=IF(VALUES(Data_block_1_HC1_sec_voltage)IS NULL,Data_block_1_HC1_sec_voltage,VALUES(Data_block_1_HC1_sec_voltage)),\
`Data_block_1_TC1_1`=IF(VALUES(`Data_block_1_TC1_1`)IS NULL,`Data_block_1_TC1_1`,VALUES(`Data_block_1_TC1_1`)),\
`Data_block_1_TC1_2`=IF(VALUES(`Data_block_1_TC1_2`)IS NULL,`Data_block_1_TC1_2`,VALUES(`Data_block_1_TC1_2`)),\
`Data_block_1_TCF1_1`=IF(VALUES(`Data_block_1_TCF1_1`)IS NULL,`Data_block_1_TCF1_1`,VALUES(`Data_block_1_TCF1_1`)),\
`HC1_HC1_output`=IF(VALUES(`HC1_HC1_output`)IS NULL,`HC1_HC1_output`,VALUES(`HC1_HC1_output`)),\
`Data_block_1_HC1_sec_cur`=IF(VALUES(`Data_block_1_HC1_sec_cur`)IS NULL,`Data_block_1_HC1_sec_cur`,VALUES(`Data_block_1_HC1_sec_cur`)),\
`Data_block_1_HC1_power`=IF(VALUES(`Data_block_1_HC1_power`)IS NULL,`Data_block_1_HC1_power`,VALUES(`Data_block_1_HC1_power`)),\
`HC1_HC1_setpoint`=IF(VALUES(`HC1_HC1_setpoint`)IS NULL,`HC1_HC1_setpoint`,VALUES(`HC1_HC1_setpoint`))\
"
The Data_block_* fields are columns in the DB, and the primary key is a datetime. Right now there are 8 columns, but I will have a lot more variables (more columns). I am not really good at Python, but I don't like this statement because it's kind of hardcoded. Could I build the statement in a for loop or something, so it doesn't have to be so long and I don't have to write all the variables manually?
Thanks for your help.
Let's assume that your column names are stored in an array cols. Then in order to generate the "interesting" inner part of the SQL statement above, you could do
',\n'.join(map(
    lambda c: '`%(col)s` = IF(VALUES(`%(col)s`) IS NULL, `%(col)s`, VALUES(`%(col)s`))' % {'col': c},
    cols))
Here, map generates for each element of cols the corresponding line of the SQL statement and join then stitches everything together.
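Building on that, a rough sketch of assembling the whole statement might look like the following. The backtick-quoted table name and the datetime key column name are assumptions (the question doesn't give them), and the cols list is truncated for brevity:

cols = ['Data_block_1_HC1_sec_voltage', 'Data_block_1_TC1_1', 'Data_block_1_TC1_2']   # etc.

# Column list, one %s placeholder per column (plus one for the datetime key),
# and the IF(VALUES(...) IS NULL, ...) update expressions, all generated from cols.
col_list = ', '.join('`%s`' % c for c in cols)
placeholders = ', '.join(['%s'] * (len(cols) + 1))
updates = ',\n'.join(
    '`{0}` = IF(VALUES(`{0}`) IS NULL, `{0}`, VALUES(`{0}`))'.format(c)
    for c in cols
)

sql = (
    "INSERT INTO `table` (`datetime`, " + col_list + ") VALUES (" + placeholders + ")\n"
    "ON DUPLICATE KEY UPDATE\n" + updates
)
# cursor.execute(sql, row) can then be called with one value per placeholder.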
