I am using SQLite3.
I have a table with 50 columns, and I would like to put each value of My_List, which has 50 elements, into its own column.
Is there any way to code a loop in Python to put my data into my table? I tried to find out but didn't get anywhere...
My current code, for 3 columns instead of 50, is:
import sqlite3
conn = sqlite3.connect("testdatabase.db")
c = conn.cursor()
c.execute('''CREATE TABLE mytable (Column1 text, Column2 text, Column3 text)''')
c.execute('''INSERT INTO mytable (Column1, Column2, Column3) VALUES (?, ?, ?)''',
          (myliste[0], myliste[1], myliste[2]))
conn.commit()
Thank you very much.
Lcs
I see what you are trying to do, and you almost have it. What you have writes one row of data; just put that into a loop and you can write the whole table:
import sqlite3

conn = sqlite3.connect("testdatabase.db")
conn.execute("CREATE TABLE mytable (Column1 text, Column2 text, Column3 text)")
mytable = [
    ('a', 'b', 'c'),
    ('d', 'e', 'f'),
]
for myliste in mytable:
    conn.execute("""INSERT INTO
        mytable (Column1, Column2, Column3)
        VALUES (?, ?, ?)""",
        myliste)
conn.commit()
Update
To create 50 columns, if you have a list of columns already, replace the variable columns below with your own:
conn = sqlite3.connect("testdatabase.db")
conn.execute('DROP TABLE IF EXISTS mytable')
# Create ['Column1', 'Column2', ..., 'Column50']
columns = ['Column%d' % n for n in range(1, 51)]
# Create 'Column1 TEXT, ... Column50 TEXT'
columns_declaration = ', '.join('%s TEXT' % c for c in columns)
conn.execute("CREATE TABLE mytable (%s)" % columns_declaration)
conn.commit()
I answered a similar question in this post. I recommended creating a CSV file and then using a bulk insert instead of INSERT INTO, because inserting row by row is really slow, and with this method you don't need to worry about the number of columns or rows. I did it for SQL Server, but I am pretty sure it will work in SQLite.
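SQLite has no server-side BULK INSERT, but a rough sketch of the same idea is to load the whole CSV and hand it to executemany() in one call. The file name, table, and column names below are made up for the demo:

```python
import csv
import sqlite3

# Write a small CSV just for the demo; normally the file would already exist
with open("data.csv", "w", newline="") as f:
    csv.writer(f).writerows([["a", "b", "c"], ["d", "e", "f"]])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (Column1 TEXT, Column2 TEXT, Column3 TEXT)")

# Read the CSV back and insert every row in a single executemany() call
with open("data.csv", newline="") as f:
    rows = list(csv.reader(f))
placeholders = ", ".join(["?"] * len(rows[0]))
conn.executemany("INSERT INTO mytable VALUES (%s)" % placeholders, rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0])  # 2
```

Because the placeholders are derived from the first row, the same snippet works unchanged for 3 or 50 columns.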
In SQL, you can omit the column list in INSERT INTO as long as you supply a value for every column, in the same order as the table definition. Then consider dynamically building the placeholders for parameterization:
placeholders = ', '.join(['?'] * 50)
c.execute('''INSERT INTO mytable VALUES ({})'''.format(placeholders), mylist)
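Putting the pieces together, a minimal end-to-end sketch might look like this (the My_List values and the in-memory database are stand-ins for your real data and file):

```python
import sqlite3

My_List = ["value%d" % n for n in range(1, 51)]  # stand-in for your real 50 values

conn = sqlite3.connect(":memory:")  # ":memory:" keeps the demo self-contained
columns = ["Column%d" % n for n in range(1, 51)]
conn.execute("CREATE TABLE mytable (%s)" % ", ".join("%s TEXT" % c for c in columns))

# One placeholder per element, so the statement adapts to any column count
placeholders = ", ".join(["?"] * len(My_List))
conn.execute("INSERT INTO mytable VALUES (%s)" % placeholders, My_List)
conn.commit()

row = conn.execute("SELECT * FROM mytable").fetchone()
print(len(row), row[0])  # 50 value1
```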
Related
I have the following dataframe:
import pandas as pd

data = [['Alex', 182.2], ['Bob', 183.2], ['Clarke', 188.4], ['Kelly', pd.NA]]
df = pd.DataFrame(data, columns=['Name', 'Height'])
I have the following SQL Server table:
create table dbo.heights (
    name varchar(10),
    height float
)
This is my code to upload the data to my table:
for index, row in df.iterrows():
    cursor.execute('INSERT INTO dbo.heights(name, height) values (?, ?)', row['Name'], row['Height'])
cnxn.commit()
cursor.close()
cnxn.close()
I want to upload the dataframe into my SQL Server table, but it fails on the null value. I tried replacing the NA with an np.nan value and it still failed. I also tried changing the height column to an "object" and replacing the NA with None and that also failed.
Please use the following instead:
for index, row in df.iterrows():
    query = "INSERT INTO dbo.heights(name, height) values (?, ?)"
    data = [row['Name'], row['Height']]
    cursor.execute(query, data)
cursor.commit()
Or use the following:
query = "INSERT INTO dbo.heights(name, height) values (?, ?)"
data = [(row['Name'], row['Height']) for index, row in df.iterrows()]
cursor.executemany(query, data)
cursor.commit()
You'll see your None values as None in Python and as NULL in your database.
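If the insert keeps failing on NaN itself, one common workaround is to convert NaN to None before binding, so the driver writes a real SQL NULL. A minimal sketch using the standard library only, with sqlite3 standing in for pyodbc/SQL Server:

```python
import math
import sqlite3

data = [["Alex", 182.2], ["Bob", 183.2], ["Clarke", 188.4], ["Kelly", float("nan")]]

# Convert NaN to None so the driver binds a real SQL NULL
records = [
    (name, None if (isinstance(h, float) and math.isnan(h)) else h)
    for name, h in data
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE heights (name TEXT, height REAL)")
conn.executemany("INSERT INTO heights VALUES (?, ?)", records)
conn.commit()
print(conn.execute("SELECT height FROM heights WHERE name = 'Kelly'").fetchone())  # (None,)
```

With a DataFrame you would build the same list of tuples from df.iterrows(), applying the NaN-to-None conversion per value.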
I tried replacing the NA with an np.nan
That is because in such a case you first have to define the dataframe schema and make the column a nullable float.
"By default, SeriesSchema/Column objects assume that values are not nullable. In order to accept null values, you need to explicitly specify nullable=True, or else you’ll get an error."
Further Reading
Try it like this:
for index, row in df.iterrows():
    cursor.execute("INSERT INTO dbo.heights (name, height) VALUES (?, ?)", (row['Name'], row['Height']))
cnxn.commit()
cursor.close()
cnxn.close()
Insert in columns with parameterized query throws no such column error
First (working) example:
# unit test input
name = "issue_number_1"
text = "issue_text"
rating_sum = 0
if name:
    # check if issue is already in db
    with self.conn:  # this should release the connection when finished
        test = cursor.execute("SELECT name, text FROM issue WHERE name = ?", (name,))
        data = test.fetchall()
        print(data)
this is working and prints:
[('issue_number_1', 'issue_text')]
Second (non working) example:
# unit test input
name = "issue_number_2"
text = "issue_text"
rating_sum = 0
if name:
    with self.conn:
        sql_string = "INSERT INTO issue (name, text, rating_sum) VALUES (name = ?, text = ?, rating_sum = ?)"
        cursor.execute(sql_string, (name, text, rating_sum,))
throws this error:
cursor.execute(sql_string, (name, text, rating_sum,))
sqlite3.OperationalError: no such column: name
The column name exists; the first example proved that.
The name "issue_number_2" does not exist in the DB.
The second example fails in exactly the same way with only name to insert (only one parameter).
I had no problems inserting with string concatenation, so the problem should be somewhere in my second example code.
You need to add single quotes. For example:
"INSERT INTO table (field) VALUES ('$1')"
Just add the values in the second set of parentheses and put single quotes around string values.
After a lot of experiments I was a little bit confused...
This is the right syntax:
sql_string = "INSERT INTO issue (name, text, rating_sum) VALUES (?, ?, ?)"
cursor.execute(sql_string, (name, text, rating_sum,))
The statement:
INSERT INTO .... VALUES ....
is an SQL statement and the correct syntax is:
INSERT INTO tablename (col1, col2, ...) VALUES (expr1, expr2, ...)
where col1, col2, ... are columns of the table tablename and expr1, expr2, ... are expressions or literals that are evaluated and assigned to each of the columns col1, col2, ... respectively.
So the syntax that you use is not valid SQL syntax.
The assignment of the values is not performed inside VALUES(...).
The correct syntax to use in Python would be:
INSERT INTO issue (name, text, rating_sum) VALUES (?, ?, ?)
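A self-contained in-memory demonstration of the corrected statement (the table and values mirror the question's unit-test input):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE issue (name TEXT, text TEXT, rating_sum INTEGER)")

name, text, rating_sum = "issue_number_2", "issue_text", 0
# Plain placeholders only; no "column = ?" assignments inside VALUES
cursor.execute(
    "INSERT INTO issue (name, text, rating_sum) VALUES (?, ?, ?)",
    (name, text, rating_sum),
)
conn.commit()
print(cursor.execute("SELECT * FROM issue").fetchall())
# [('issue_number_2', 'issue_text', 0)]
```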
I have a list that contains many lists in Python.
my_list = [['city', 'state'], ['tampa', 'florida'], ['miami','florida']]
The nested list at index 0 contains the column headers, and the rest of the nested lists contain the corresponding values. How would I insert this into SQL Server using pyodbc or sqlalchemy? I have been using pandas' pd.to_sql and want to make this a process in pure Python. Any help would be greatly appreciated.
expected output table would look like:
city |state
-------------
tampa|florida
miami|florida
Since the column names come from your list, you have to build the query string yourself; column names and table names can't be parameterised with placeholders (?).
import pyodbc
conn = pyodbc.connect(my_connection_string)
cursor = conn.cursor()
my_list = [['city', 'state'], ['tampa', 'florida'], ['miami','florida']]
columns = ','.join(my_list[0]) #String of column names
values = ','.join(['?'] * len(my_list[0])) #Placeholders for values
query = "INSERT INTO mytable({0}) VALUES ({1})".format(columns, values)
#Loop through rest of list, inserting data
for l in my_list[1:]:
    cursor.execute(query, l)
conn.commit() #save changes
Update:
If you have a large number of records to insert you can do that in one go using executemany. Change the code like this:
columns = ','.join(my_list[0]) #String of column names
values = ','.join(['?'] * len(my_list[0])) #Placeholders for values
#Bulk insert
query = "INSERT INTO mytable({0}) VALUES ({1})".format(columns, values)
cursor.executemany(query, my_list[1:])
conn.commit() #save changes
Assuming conn is already an open connection to your database:
cursor = conn.cursor()
for row in my_list[1:]:  # skip the header row
    cursor.execute('INSERT INTO my_table (city, state) VALUES (?, ?)', row)
cursor.commit()
Since the column names are the first element of the array, just do:
q = """CREATE TABLE IF NOT EXISTS stud_data (`{col1}` VARCHAR(250), `{col2}` VARCHAR(250));"""
sql_cmd = q.format(col1=my_list[0][0], col2=my_list[0][1])
mycursor.execute(sql_cmd)  # Create the table with columns
Now to add the values to the table, do:
sql = "INSERT IGNORE INTO stud_data (city, state) VALUES (%s, %s)"
for i in range(1, len(my_list)):
    mycursor.execute(sql, (my_list[i][0], my_list[i][1]))
mydb.commit()  # commit on the connection, not the cursor
print(mycursor.rowcount, "Record Inserted.")  # Row count from the last insert
I have a sql file in which there are many insert sql to the same table:
insert into tbname
values (xxx1);
insert into tbname values (xxx2);
insert into
tbname values (xxx3);
...
How can I convert this file into a new file containing a single statement like:
insert into tbname values (xxx1),(xxx2),(xxx3)...;
Because of the varying insert formats in the sql file, it is hard to use a regular expression in Python.
If you want to insert multiple rows into the table, use the executemany() method.
mydb = mysql.connector.connect(
host="localhost",
user="yourusername",
passwd="yourpassword",
database="mydatabase"
)
mycursor = mydb.cursor()
sql = "INSERT INTO tbname VALUES (%s)"
# Note the trailing commas: ('xx1',) is a one-element tuple, ('xx1') is just a string
val = [
    ('xx1',),
    ('xx2',),
    ('xx3',)
]
mycursor.executemany(sql, val)
mydb.commit()
For more info, follow this.
Open your file into a string and use replace to collapse the repeated prefixes into a single statement. Something like:
s = "insert into tbname values (val1, val2);insert into tbname values (val3, val4);insert into tbname values (val5, val6);"
values = s.replace(";insert into tbname values", ', ')
It's an unorthodox method, but it could work in your case to get everything into one insert.
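If the statements vary in whitespace and line breaks, as in the question, a regular expression with re.DOTALL can still normalize them. A sketch, assuming every statement targets tbname and no value string contains ");":

```python
import re

sql = """insert into tbname
values (1, 'a');
insert into tbname values (2, 'b');
insert into
tbname values (3, 'c');
"""

# Grab each parenthesized VALUES group, tolerating arbitrary line breaks
groups = re.findall(
    r"insert\s+into\s+tbname\s+values\s*(\(.*?\))\s*;",
    sql,
    flags=re.IGNORECASE | re.DOTALL,
)
merged = "insert into tbname values %s;" % ", ".join(groups)
print(merged)  # insert into tbname values (1, 'a'), (2, 'b'), (3, 'c');
```

A proper SQL parser would be safer if the values can contain semicolons or parentheses inside string literals; this is only a best-effort text transform.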
This question already has answers here:
How can I get dict from sqlite query?
(16 answers)
Closed 4 years ago.
Issue:
Hi, right now I am making queries to sqlite and assigning the results to variables like this:
Table structure: rowid, name, something
cursor.execute("SELECT * FROM my_table WHERE my_condition = 'ExampleForSO'")
found_record = cursor.fetchone()
record_id = found_record[0]
record_name = found_record[1]
record_something = found_record[2]
print(record_name)
However, it's very possible that someday I have to add a new column to the table. Let's put the example of adding that column:
Table structure: rowid, age, name, something
In that scenario, if we run the same code, name and something will be assigned wrongly, and the print will give me the age instead of the name, so I have to edit the code manually to fit the current indexes. However, I am now working with tables of more than 100 fields for a complex UI, and doing this is tiresome.
Desired output:
I am wondering if there is a better way to catch results by using dicts or something like this:
Note for lurkers: the next snippet is made-up code that does not work; do not use it.
cursor.execute_to(my_dict,
'''SELECT rowid as my_dict["id"],
name as my_dict["name"],
something as my_dict["something"]
FROM my_table WHERE my_condition = "ExampleForSO"''')
print(my_dict['name'])
I am probably wrong with this approach, but that's close to what I want. That way I don't access the results by index, and if I add a new column, no matter where it is, the output stays the same.
What is the correct way to achieve it? Is there any other alternatives?
You can use namedtuple and then specify connection.row_factory in sqlite. Example:
import sqlite3
from collections import namedtuple
# specify my row structure using namedtuple
MyRecord = namedtuple('MyRecord', 'record_id record_name record_something')
con = sqlite3.connect(":memory:")
con.isolation_level = None
con.row_factory = lambda cursor, row: MyRecord(*row)
cur = con.cursor()
cur.execute("CREATE TABLE my_table (record_id integer PRIMARY KEY, record_name text NOT NULL, record_something text NOT NULL)")
cur.execute("INSERT INTO my_table (record_name, record_something) VALUES (?, ?)", ('Andrej', 'This is something'))
cur.execute("INSERT INTO my_table (record_name, record_something) VALUES (?, ?)", ('Andrej', 'This is something too'))
cur.execute("INSERT INTO my_table (record_name, record_something) VALUES (?, ?)", ('Adrika', 'This is new!'))
for row in cur.execute("SELECT * FROM my_table WHERE record_name LIKE 'A%'"):
    print(f'ID={row.record_id} NAME={row.record_name} SOMETHING={row.record_something}')
con.close()
Prints:
ID=1 NAME=Andrej SOMETHING=This is something
ID=2 NAME=Andrej SOMETHING=This is something too
ID=3 NAME=Adrika SOMETHING=This is new!
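The linked duplicate also covers sqlite3's built-in sqlite3.Row factory, which gives name-based access without defining a namedtuple up front. A minimal sketch with a made-up table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row  # rows now support access by column name
cur = con.cursor()
cur.execute("CREATE TABLE my_table (name TEXT, something TEXT)")
cur.execute("INSERT INTO my_table VALUES (?, ?)", ("ExampleForSO", "data"))

row = cur.execute("SELECT rowid, name, something FROM my_table").fetchone()
print(row["name"], row["something"])  # ExampleForSO data
print(dict(row))  # a plain dict, new columns picked up automatically
```

Because columns are looked up by name, adding a column to the table later does not shift any of the existing accesses.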