I am trying to add a new column to an existing table and populate it in the database. The predictions come from a dataframe column, and I am getting an error. What am I doing wrong?
Code:
conn = create_connection()
cur = conn.cursor()
query = "ALTER TABLE STOCK_MARKET_FORECASTING ADD COLUMN predictions float"
cur.execute(query)
# Inserting predictions in database
def inserting_records(df):
    for i in range(0, len(df)):
        values = (df['Predicted_values_Hourly_Interval'][i])
        cur.execute("UPDATE STOCK_MARKET_FORECASTING SET (predictions) VALUES (%s)", values)
    conn.commit()
    print("Records created successfully")

inserting_records(predictions)
You're passing in a single value – cur.execute requires a tuple of values.
You're probably looking for INSERT, not UPDATE. UPDATE updates existing rows.
def inserting_records(df):
    series = df['Predicted_values_Hourly_Interval']
    for val in series:
        cur.execute("INSERT INTO STOCK_MARKET_FORECASTING (predictions) VALUES (%s)", (val, ))
    conn.commit()
might be what you're looking for.
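If the goal is instead to fill the new predictions column on rows that already exist rather than append new rows, an UPDATE needs a WHERE clause to identify which row to change. A minimal sketch, assuming the table has an id column that lines up with the DataFrame's positional index (that column name is an assumption, not from the original schema):
def update_predictions(df):
    for i, val in enumerate(df['Predicted_values_Hourly_Interval']):
        # 'id' is a hypothetical key column matching the DataFrame position
        cur.execute(
            "UPDATE STOCK_MARKET_FORECASTING SET predictions = %s WHERE id = %s",
            (float(val), i),
        )
    conn.commit()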
I have the following dataframe:
data = [['Alex', 182.2],['Bob', 183.2],['Clarke', 188.4], ['Kelly', NA]]
df = pd.DataFrame(data, columns = ['Name', 'Height'])
I have the following SQL Server table:
create table dbo.heights (
name varchar(10),
height float
)
This is my code to upload the data to my table:
for index, row in df.iterrows():
    cursor.execute('INSERT INTO dbo.heights(name, height) values (?, ?)', row.name, row.height)
cnxn.commit()
cursor.close()
cnxn.close()
I want to upload the dataframe into my SQL Server table, but it fails on the null value. I tried replacing the NA with an np.nan value and it still failed. I also tried changing the height column to an "object" and replacing the NA with None and that also failed.
Please use the following instead:
for index, row in df.iterrows():
    query = "INSERT INTO dbo.heights(name, height) values (?, ?)"
    # row.name would return the index label, so access the columns explicitly
    data = [row['Name'], row['Height']]
    cursor.execute(query, data)
cursor.commit()
Or use the following:
query = "INSERT INTO dbo.heights(name, height) values (?, ?)"
data = [(row['Name'], row['Height']) for index, row in df.iterrows()]
cursor.executemany(query, data)
cursor.commit()
You'll see your None values as None in Python and as NULL in your database.
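A related pitfall: pandas usually stores missing values as NaN, and many drivers reject NaN where they would happily send None as NULL. A small sketch of converting NaN to None before inserting, assuming pandas is imported as pd:
# Cast to object first so None survives, then swap NaN for None
df = df.astype(object).where(pd.notnull(df), None)
After this conversion, the iterrows loop above sends NULL for Kelly's missing height.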
I tried replacing the NA with an np.nan
That is because in that case you first have to define the dataframe schema and make the column a nullable float.
"By default, SeriesSchema/Column objects assume that values are not nullable. In order to accept null values, you need to explicitly specify nullable=True, or else you’ll get an error."
Further Reading
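For reference, a minimal pandera sketch of such a nullable schema (pandera is an assumption here, based on the quote above; the column names match the earlier example):
import pandera as pa

schema = pa.DataFrameSchema({
    "Name": pa.Column(str),
    "Height": pa.Column(float, nullable=True),  # explicitly allow nulls
})
validated = schema.validate(df)  # raises a SchemaError if nulls appear where not allowed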
Try it like this:
for index, row in df.iterrows():
    cursor.execute("INSERT INTO table (`name`, `height`) VALUES (%s, %s)", (row['Name'], row['Height']))
cnxn.commit()
cursor.close()
cnxn.close()
I'm trying to store very large int values in a sqlite3 db. The values are 100-115 digits long.
I've tried every possible combination (sending the input as string/integer and storing it as int/text/blob), but the result is always the same: the value 7239589231682139...97853 becomes 7.239589231682139e+113 in the db.
My db schema is:
conn.execute('''CREATE TABLE DATA
             (RESULT TEXT NOT NULL)''')
and the query is:
def insert(result):
    conn.execute(f'INSERT INTO DATA (RESULT) VALUES ({result})')
    conn.commit()
I wrote a simple function to test the above case:
DB_NAME = 'test.db'
conn = sqlite3.connect(DB_NAME)
conn.execute(('''CREATE TABLE TEST_TABLE
(TYPE_INT INT,
TYPE_REAL REAL,
TYPE_TEXT TEXT,
TYPE_BLOB BLOB);
'''))
value1 = '123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890'
value2 = 123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
conn.execute(f'INSERT INTO TEST_TABLE (TYPE_INT, TYPE_REAL, TYPE_TEXT, TYPE_BLOB) VALUES ({value1}, {value1}, {value1}, {value1})')
conn.execute(f'INSERT INTO TEST_TABLE (TYPE_INT, TYPE_REAL, TYPE_TEXT, TYPE_BLOB) VALUES ({value2}, {value2}, {value2}, {value2})')
conn.commit()
cursor = conn.execute('SELECT * from TEST_TABLE')
for col in cursor:
    print(f'{col[0]}, {col[1]}, {col[2]}, {col[3]}')
    print('--------------')
conn.close()
As you can see, I try all the possibilities, and the output is:
1.2345678901234568e+119, 1.2345678901234568e+119, 1.23456789012346e+119, 1.2345678901234568e+119
1.2345678901234568e+119, 1.2345678901234568e+119, 1.23456789012346e+119, 1.2345678901234568e+119
You are passing a value without single quotes so it is considered numeric.
Pass it as a string like this:
value1 = "123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890"
conn.execute("INSERT INTO TEST_TABLE (TYPE_TEXT) VALUES (?)", (value1,))
The ? placeholder will be replaced by:
'123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890'
because the value's type is string, and it will be stored properly as TEXT, which is the correct data type for the column.
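It's worth noting why TEXT is the only workable choice here: SQLite's INTEGER storage class is a signed 8-byte value, capping it at roughly 19 decimal digits, so a 120-digit number cannot survive as an integer. A quick round-trip sketch showing the value coming back intact:
conn.execute("INSERT INTO TEST_TABLE (TYPE_TEXT) VALUES (?)", (value1,))
conn.commit()
row = conn.execute("SELECT TYPE_TEXT FROM TEST_TABLE").fetchone()
big = int(row[0])  # convert back to a Python int, full precision preserved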
I have a list that contains many lists in Python.
my_list = [['city', 'state'], ['tampa', 'florida'], ['miami','florida']]
The nested list at index 0 contains the column headers, and the rest of the nested lists contain the corresponding values. How would I insert this into SQL Server using pyodbc or sqlalchemy? I have been using pandas pd.to_sql and want to make this a pure-Python process. Any help would be greatly appreciated.
expected output table would look like:
city |state
-------------
tampa|florida
miami|florida
Since the column names are coming from your list you have to build a query string to insert the values. Column names and table names can't be parameterised with placeholders (?).
import pyodbc
conn = pyodbc.connect(my_connection_string)
cursor = conn.cursor()
my_list = [['city', 'state'], ['tampa', 'florida'], ['miami','florida']]
columns = ','.join(my_list[0]) #String of column names
values = ','.join(['?'] * len(my_list[0])) #Placeholders for values
query = "INSERT INTO mytable({0}) VALUES ({1})".format(columns, values)
#Loop through rest of list, inserting data
for l in my_list[1:]:
    cursor.execute(query, l)
conn.commit() #save changes
Update:
If you have a large number of records to insert you can do that in one go using executemany. Change the code like this:
columns = ','.join(my_list[0]) #String of column names
values = ','.join(['?'] * len(my_list[0])) #Placeholders for values
#Bulk insert
query = "INSERT INTO mytable({0}) VALUES ({1})".format(columns, values)
cursor.executemany(query, my_list[1:])
conn.commit() #save changes
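As a side note, if the driver is pyodbc with SQL Server's ODBC driver, enabling fast_executemany can speed up large batches considerably by sending the parameter sets in bulk:
cursor.fast_executemany = True  # pyodbc-specific; batches parameters in one round trip
cursor.executemany(query, my_list[1:])
conn.commit()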
Assuming conn is an already open connection to your database:
cursor = conn.cursor()
for row in my_list[1:]:  # skip the header row
    cursor.execute('INSERT INTO my_table (city, state) VALUES (?, ?)', row)
cursor.commit()
Since the column names are the first elements in the array, just do:
q ="""CREATE TABLE IF NOT EXISTS stud_data (`{col1}` VARCHAR(250),`{col2}` VARCHAR(250); """
sql_cmd = q.format(col1 = my_list[0][0],col2 = my_list[0][1])
mcursor.execute(sql)#Create the table with columns
Now to add the values to the table, do:
for i in range(1, len(my_list)):
    sql = "INSERT IGNORE INTO stud_data(city, state) VALUES (%s, %s)"
    mycursor.execute(sql, (my_list[i][0], my_list[i][1]))
conn.commit() # commit on the connection, not the cursor
print(mycursor.rowcount, "Record Inserted.") # Count of rows from the last insert
I'm attempting to update my sqlite db with 2 python lists. I have a sqlite db with three fields: name, number, date. I also have python lists with similar names. I'm trying to figure out a way to update my sqlite db with data from these 2 lists. I can get the db created, and even get a single column filled, but I can't seem to update it correctly or at all. Is there a way to INSERT both lists at once, rather than INSERT a single column and then UPDATE the db with the other?
Here is what I have so far:
name_list = []
number_list = []
date = now.date()
strDate = date.strftime("%B %Y")
tableName = strDate
sqlTable = 'CREATE TABLE IF NOT EXISTS ' + tableName + '(name text, number integer, date text)'
c.execute(sqlTable)
conn.commit()
for i in name_list:
    c.execute('INSERT INTO January2018(names) VALUES (?)', [i])
conn.commit()
I can't seem to get past this point. I still need to add another list of data (number_list) and attach the date to each row.
Here's what I have on that:
for i in number_list:
    c.execute('UPDATE myTable SET number = ? WHERE name', [i])
conn.commit()
Any help would be much appreciated. And if you need more information, please let me know.
You can use executemany with zip:
c.executemany('INSERT INTO January2018 (name, number) VALUES (?, ?)', zip(name_list, number_list))
conn.commit()
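To attach the date to every row as well, as the question asks, the simplest option is to zip in a repeated value; a sketch reusing strDate from the question:
rows = zip(name_list, number_list, [strDate] * len(name_list))
c.executemany('INSERT INTO January2018 (name, number, date) VALUES (?, ?, ?)', rows)
conn.commit()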
I have constructed a database but when I loop through my data to populate it, I get the following error:
OperationalError: no such column: tmp1
Code:
with con:
    cur = con.cursor()
    cur.execute("CREATE TABLE TESTTABLE(X REAL, Y REAL)")

for i in xrange(0,5):
    tmp1 = array[i,0]
    tmp2 = array[i,1]
    with con:
        cur.execute("""INSERT INTO TESTTABLE VALUES(tmp1,tmp2)""")
Basically I have a big array that I want to transfer into a database. This probably isn't the most efficient way of going about it. Suggestions?
If you want to insert values into a row, you need to pass those values along as SQL parameters to the .execute() call:
with con:
    for i in xrange(0,5):
        tmp1 = array[i, 0]
        tmp2 = array[i, 1]
        cur.execute("""INSERT INTO TESTTABLE VALUES(?, ?)""", (tmp1, tmp2))
The ? characters are parameters; they are filled, in order, with values taken from the second argument to .execute(), a tuple. The above code inserts the first five rows of array as pairs into the database.
Names in the SQL code have no correlation to names you define in Python, values can only be passed in explicitly.
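On the efficiency point raised in the question: for larger arrays, executemany avoids calling .execute() once per row from Python; a sketch using the same array:
with con:
    rows = [(array[i, 0], array[i, 1]) for i in xrange(0, 5)]
    cur.executemany("INSERT INTO TESTTABLE VALUES(?, ?)", rows)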