I have to create a table dynamically in SQLite with columns = ['col1', 'col2', 'col3']
so I am using:
create_query = "create table if not exists myTable1 ({0})".format(" text,".join(columns))
print(create_query)
This works.
But I have to add "primary key" to, say, the 3rd entry in the list, or the 1st entry in the list.
CREATE TABLE test (col1 primary key ,col2,col3);
Is there any conditional option we can use?
Basically I don't want just the column names; I want to add the primary key as well.
I tried different combinations with no luck.
Okay, the best way would be to add an inline for loop with an if condition to cycle through the values. To this we can add a variable column_no that stores the index of the column to use as the primary key.
Here's what the resultant code would look like:
columns = ['col1','col2','col3']
column_no = 0
create_query = "create table if not exists myTable1 ({0})".format(', '.join(val + " text" + (" primary key" if i == column_no else "") for i, val in enumerate(columns)))
print(create_query)
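With the example list above and column_no = 0, the printed statement should come out roughly as create table if not exists myTable1 (col1 text primary key, col2 text, col3 text); change column_no to move the primary key to a different column.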
I create a table with a primary key and an autoincrement column.
with open('RAND.xml', "rb") as f, sqlite3.connect("race.db") as connection:
    c = connection.cursor()
    c.execute(
        """CREATE TABLE IF NOT EXISTS race(RaceID INTEGER PRIMARY KEY AUTOINCREMENT, R_Number INT, R_KEY INT,
           R_NAME TEXT, R_AGE INT, R_DIST TEXT, R_CLASS, M_ID INT)""")
I then want to insert a tuple which of course has one fewer value than the number of columns, because the first column is the autoincrement.
sql_data = tuple(b)
c.executemany('insert into race values(?,?,?,?,?,?,?)', b)
How do I stop this error?
sqlite3.OperationalError: table race has 8 columns but 7 values were supplied
First, it's extremely bad practice to assume a specific ordering of the columns; some DBA might come along and modify the table, breaking your SQL statements. Secondly, an autoincrement value will only be used if you don't specify a value for the field in your INSERT statement; if you give a value, that value will be stored in the new row.
If you amend the code to read
c.executemany('''insert into
                 race(R_Number, R_KEY, R_NAME, R_AGE, R_DIST, R_CLASS, M_ID)
                 values(?,?,?,?,?,?,?)''',
              sql_data)
you should find that everything works as expected.
From the SQLite documentation:
If the column-name list after table-name is omitted then the number of values inserted into each row must be the same as the number of columns in the table.
RaceID is a column in the table, so it is expected to be present when you're doing an INSERT without explicitly naming the columns. You can get the desired behavior (assign RaceID the next autoincrement value) by passing an SQLite NULL value in that column, which in Python is None:
sql_data = tuple((None,) + a for a in b)
c.executemany('insert into race values(?,?,?,?,?,?,?,?)', sql_data)
The above assumes b is a sequence of sequences of parameters for your executemany statement and attempts to prepend None to each sub-sequence. Modify as necessary for your code.
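For illustration, here is a minimal sketch of that transformation, assuming b is a list of 7-tuples, one per race, holding everything except the autoincrement RaceID (the example rows are made up):
# Hypothetical example rows: the 7 non-autoincrement columns, in table order.
b = [
    (1, 101, 'First Race', 3, '1200m', 'Class A', 11),
    (2, 102, 'Second Race', 4, '1400m', 'Class B', 12),
]

# Prepend None so SQLite assigns RaceID via autoincrement.
sql_data = tuple((None,) + tuple(a) for a in b)
# sql_data[0] == (None, 1, 101, 'First Race', 3, '1200m', 'Class A', 11)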
Say I have 100 different integers that I want to store as a row with 100 columns.
I am trying it like this:
db = sqlite3.connect("test.db")
c = db.cursor()
c.execute('''
    CREATE TABLE IF NOT EXISTS nums(
    id INTEGER PRIMARY KEY,
    ''')
for i in range(100):
    c.execute('''
        ALTER TABLE nums
        ADD ''' + 'column_' + i + '''INTEGER''')
db.commit()
Someone told me that when you use numbers as column names you could probably do it a better way. But if I, for example, have a list of strings in Python and I want to loop through them and store every individual string in its own column, the approach would be the same, right?
However, this code runs without errors for me, but no new table is created. How come?
Your ALTER statement is broken: 'column_' + i concatenates a string with an integer (which raises a TypeError; you need str(i) or an f-string), and there is no space before INTEGER. The fixed version below also spells out the optional COLUMN keyword to make the intent clearer:
for i in range(100):
    c.execute(f'ALTER TABLE nums ADD COLUMN column_{i} INTEGER')
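The same pattern works for a Python list of strings, assuming the strings are valid (or sanitized) identifiers, since column names cannot be bound as parameters; a minimal sketch with a hypothetical table and list:
import sqlite3

db = sqlite3.connect("test.db")
c = db.cursor()
c.execute("CREATE TABLE IF NOT EXISTS words(id INTEGER PRIMARY KEY)")

# Hypothetical list of column names; each becomes its own TEXT column.
names = ["alpha", "beta", "gamma"]
for name in names:
    c.execute(f'ALTER TABLE words ADD COLUMN "{name}" TEXT')

db.commit()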
I'm trying to store my JSON file in the popNames DB, but this error pops up.
My JSON file is a dictionary with the country as the key and the person names as the value. In my DB I want to put the country in the first column as a primary key and the names in the subsequent columns of the table.
Could anyone help me with this?
[image: screenshot of the error]
Every INSERT call creates a new row in the PopNamesDB table. Your code creates many such rows: the first row has a country but NULL for all the other columns. The next N rows each have a null country, a value for colName, and NULL for all the other columns.
An easy way to fix your code is to change your follow-up INSERT calls (on line 109) to modify the row you created earlier instead of creating new rows. The query will look something like:
cur.execute(''' UPDATE PopNamesDB SET ''' + colName + ''' = ? WHERE country = ?''', (y, c))
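Putting it together, a rough sketch of the whole loop (the popNames dictionary variable and the name1, name2, ... column naming scheme are assumptions based on your description, not your actual code):
# popNames: {country: [name1, name2, ...]} loaded from the JSON file
for country, names in popNames.items():
    # create the row once, keyed by country
    cur.execute('INSERT INTO PopNamesDB (country) VALUES (?)', (country,))
    # fill the remaining columns by updating that same row
    for i, name in enumerate(names, start=1):
        colName = 'name' + str(i)  # hypothetical column naming scheme
        cur.execute('UPDATE PopNamesDB SET ' + colName + ' = ? WHERE country = ?',
                    (name, country))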
I have a list of IDs [1,5,8,...]. I also have a table (TableA) that has 100 rows (which matches my list of IDs exactly). I have another table (TableB) that has tons of rows (2000), and I want to update one column (True/False) based on whether the primary key exists in TableA's primary keys (or my Python list of IDs, which is the same).
Currently I loop through my list of IDs and just do a simple update statement (below is Python code):
for id in ID_List:
    cur.execute('update TableB set "Column1"=%s where "ID"=%s', (False, id))
This works fine, but I am curious whether there is a single line of code I could use rather than a loop. Something like:
cur.execute('update TableB set "Column1"=False where "ID" in ID_List')
or
cur.execute('update TableB set "Column1"=False where "ID" in TableA.keys()')
and all the rows in the ID_List would update quickly. I can't use ">" or "<" because the IDs might not all be greater than or smaller than a specific number. I may want to change IDs (3,6,9) but not (4,7,8).
You could try something like this:
update TableB set "Column1"=False where TableB."ID" in (select TableA."ID" from TableA)
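If you would rather drive it from the Python ID_List (as in your second snippet), a common workaround is to build the placeholder list yourself; a sketch, assuming the same %s parameter style (e.g. psycopg2) as your loop version:
# one %s placeholder per ID; the list itself is passed as the parameters
placeholders = ', '.join(['%s'] * len(ID_List))
query = 'update TableB set "Column1"=False where "ID" in (' + placeholders + ')'
cur.execute(query, ID_List)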
I have a database where I store some values with an auto-generated index key. I also have an n:m mapping table like this:
create table data(id int not null identity(1,1), col1 int not null, col2 varchar(256) not null);
create table otherdata(id int not null identity(1,1), value varchar(256) not null);
create table data_map(dataid int not null, otherdataid int not null);
Every day the data table needs to be updated with a list of new values, where a lot of them are already present but still need to be inserted into data_map (the key in otherdata is generated then, so in that table the data is always new).
One way of doing it would be to first try to insert all values, then select the generated IDs, then insert into data_map:
mydata = [] # list of tuples
cursor.executemany("if not exists (select * from data where col1 = %d and col2 = %d) insert into data (col1, col2) values (%d, %d)", mydata);
# now select the id's
# [...]
But that obviously is quite bad, because I need to select everything without using the key, and I also need to do the existence check without using the key, so I need indexed data first, otherwise everything is very slow.
My next approach was to use a hash function (like MD5 or CRC64) to generate my own hash over col1 and col2, so I can insert all values without using a select and use the indexed key when inserting the missing values.
Can this be optimized, or is it the best I can do?
The number of rows is >500k per change, where maybe ~20-50% will already be in the database.
Timing-wise, it looks like calculating the hashes is much faster than inserting the data into the database.
As far as I can tell, you are using mysql.connector. If so, when you run cursor.execute() you should not use %d placeholders. Everything should be just %s, and the connector will take care of the type conversions.
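Applied to the executemany call above, that would look roughly like this (the example rows are made up; each tuple supplies col1 and col2 twice because the statement uses them twice):
# Same statement, but with %s placeholders; the connector converts the types.
mydata = [(1, 'a', 1, 'a'), (2, 'b', 2, 'b')]  # hypothetical (col1, col2, col1, col2) rows
cursor.executemany(
    "if not exists (select * from data where col1 = %s and col2 = %s) "
    "insert into data (col1, col2) values (%s, %s)",
    mydata)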