Upsert / merge tables in SQLite - python

I have created a database using sqlite3 in python that has thousands of tables. Each of these tables contains thousands of rows and ten columns. One of the columns is the date and time of an event: it is a string that is formatted as YYYY-mm-dd HH:MM:SS, which I have defined to be the primary key for each table. Every so often, I collect some new data (hundreds of rows) for each of these tables. Each new dataset is pulled from a server and loaded in directly as a pandas data frame or is stored as a CSV file. The new data contains the same ten columns as my original data. I need to update the tables in my database using this new data in the following way:
Given a table in my database, for each row in the new dataset, if the date and time of the row matches the date and time of an existing row in my database, update the remaining columns of that row using the values in the new dataset.
If the date and time does not yet exist, create a new row and insert it to my database.
Below are my questions:
I've done some searching on Google and it looks like I should be using the UPSERT (merge) functionality of sqlite but I can't seem to find any examples showing how to use it. Is there an actual UPSERT command, and if so, could someone please provide an example (preferably with sqlite3 in Python) or point me to a helpful resource?
Also, is there a way to do this in bulk so that I can UPSERT each new dataset into my database without having to go row by row? (I found this link, which suggests that it is possible, but I'm new to using databases and am not sure how to actually run the UPSERT command.)
Can UPSERT also be performed directly using pandas.DataFrame.to_sql?
My backup solution is loading in the table to be UPSERTed using pd.read_sql_query("SELECT * from table", con), performing pandas.DataFrame.merge, deleting the said table from the database, and then adding in the updated table to the database using pd.DataFrame.to_sql (but this would be inefficient).
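For concreteness, here is a rough sketch of that fallback with placeholder table, column, and file names, using concat/drop_duplicates in place of merge for brevity:

import sqlite3
import pandas as pd

con = sqlite3.connect("my_database.db")                 # placeholder database path
old = pd.read_sql_query("SELECT * FROM my_table", con)  # existing table (placeholder name)
new = pd.read_csv("new_data.csv")                       # newly collected rows (placeholder path)
# Keep the new row whenever the datetime key clashes, otherwise keep the old one.
merged = pd.concat([new, old]).drop_duplicates(subset="event_time", keep="first")
merged.to_sql("my_table", con, if_exists="replace", index=False)
con.close()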

Instead of going through the UPSERT command, why don't you write your own algorithm that finds each date and time and updates the row if it is found, and otherwise inserts a new row? Check out the code I wrote for you below. Let me know if you are still confused. You can even do this for hundreds of tables by replacing the table name in the algorithm with a variable and changing it over your whole list of table names.
import sqlite3
import pandas as pd

connection_str = "my_database.db"          # path to your SQLite database file
csv_data = pd.read_csv("my_CSV_file.csv")  # path to your CSV data

def manual_upsert():
    con = sqlite3.connect(connection_str)
    cur = con.cursor()
    cur.execute("SELECT * FROM my_CSV_data")  # read the existing rows from the table
    data = cur.fetchall()
    old_data_list = []  # collection of all dates already in the database table
    for line in data:
        old_data_list.append(line[0])  # assumes the date column is at index 0
    for new_data in csv_data.itertuples(index=False):  # iterate over the rows of the CSV data
        if new_data[0] in old_data_list:
            # The date already exists, so update the remaining columns of that row.
            cur.execute("UPDATE my_CSV_data SET column1=?, column2=?, column3=? WHERE my_date_column=?",
                        (new_data[1], new_data[2], new_data[3], new_data[0]))
        else:
            # The date is not found, so insert a new row.
            cur.execute("INSERT INTO my_CSV_data VALUES(?,?,?,?)",
                        (new_data[0], new_data[1], new_data[2], new_data[3]))
    con.commit()
    con.close()

manual_upsert()

First, even though the questions are related, ask them separately in the future.
There is official documentation on UPSERT handling in SQLite, but it is a bit abstract. You can find examples and discussion here: SQLite - UPSERT *not* INSERT or REPLACE
Wrap the statements in a transaction and they will be executed in bulk.
As the existence of this library suggests, pandas.DataFrame.to_sql does not generate UPSERT commands (only INSERT).
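For reference, here is a minimal sketch of SQLite's native UPSERT (INSERT ... ON CONFLICT ... DO UPDATE, available in SQLite 3.24.0 and later) driven from Python with executemany; the table and column names are placeholders, and it assumes the datetime column is the primary key as in the question:

import sqlite3

con = sqlite3.connect("my_database.db")  # placeholder database path
rows = [  # (datetime, col1, col2) tuples, e.g. built from df.itertuples(index=False)
    ("2020-01-01 00:00:00", 1.0, 2.0),
]
con.executemany(
    """
    INSERT INTO my_table (event_time, col1, col2)
    VALUES (?, ?, ?)
    ON CONFLICT(event_time) DO UPDATE SET
        col1 = excluded.col1,
        col2 = excluded.col2
    """,
    rows,
)
con.commit()
con.close()

With the default isolation level, the sqlite3 module runs these statements inside a single implicit transaction until con.commit(), so the batch is applied in bulk rather than row by row.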

Related

best way to upsert 300 million entries into postgres?

I have a new CSV file every day with 400 million+ entries which I need to upsert into my database (3 tables with 2 foreign keys, indexed). The majority of the entries are already in the table, in which case I need to update a column. Some entries, which are not already in the table, need to be inserted.
I tried to insert the CSV each day into a temptable and then run:
INSERT INTO restaurants (name, food_id, street_id, datecreated, lastdayobservedopen)
SELECT DISTINCT temptable.name, typesoffood.food_id, location.street_id,
                temptable.datecreated, temptable.lastdayobservedopen
FROM temptable
INNER JOIN typesoffood ON typesoffood.food_type = temptable.food_type
INNER JOIN location ON location.street_name = temptable.street_name
ON CONFLICT ON CONSTRAINT restaurants_pk
DO UPDATE SET lastdayobservedopen = EXCLUDED.lastdayobservedopen
But it takes over 6 hrs.
Is it possible to make this faster?
Edit:
Some more details: there are 3 tables:
restaurants(name, food_id, street_id, datecreated, lastdayobservedopen) with pk (name, street_id) and fks (food_id and street_id);
typesoffood(food_id, food_type) with pk (food_id) and an index on food_type;
location(street_id, street_name) with pk (street_id) and an index on street_name.
As for the CSV file, I don’t know which are new or old entries, but I do know that the majority of the entries are already in the database, which would require me to update the lastdayobserved date. The rest are to be inserted with the lastdayobserved date as today. This is supposed to help distinguish between restaurants that are no longer in operation (in which case their lastdayobserved column would not be updated) and currently operating restaurants, whose date in that column should always match today’s date. Open to more efficient schema suggestions as well. Thanks to all!
There is a command in SQL called BULK INSERT that can handle large volumes of data:
BULK INSERT #temp
FROM "file location path"
If you can change your Postgres settings, you could take advantage of parallelism in Postgres. Otherwise you could at least speed up the CSV upload using Postgres's bulk upload, otherwise known as the COPY command.
Without more details it's hard to give better advice.
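For illustration, a minimal psycopg2 sketch of that COPY-based load into the staging table (the connection string and file path are assumptions):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = conn.cursor()
with open("/path/to/daily.csv") as f:   # placeholder path to the daily CSV
    # COPY is PostgreSQL's bulk-load path; it is far faster than row-by-row INSERTs.
    cur.copy_expert("COPY temptable FROM STDIN WITH (FORMAT csv, HEADER)", f)
conn.commit()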

Less Memory-intense way of copying tables & renaming columns in sqlite/pandas

I have found a very nice way to:
read a table from a sql database
rename the columns with a dict (read from a yaml file)
rewrite the table to another database
The only problem is that as the table becomes bigger (10 columns x several million rows), reading the table into pandas is so memory-intensive that it causes the process to be killed.
There must be an easier way. I looked at ALTER TABLE statements, but they seem to be very complicated as well and will not do the copying into another db. Any ideas on how to do the same operation without using this much memory? It feels like pandas is a crutch I use due to my bad SQL.
import pandas as pd
import sqlite3

def translate2generic(sourcedb, targetdb, sourcetable,
                      targettable, toberenamed):
    """Change table's column names to fit generic api keys.

    :param: Path to source db
    :param: Path to target db
    :param: Name of table to be translated in source
    :param: Name of the newly to be created table in targetdb
    :param: dictionary of translations
    :return: New column names in target db
    """
    sourceconn = sqlite3.connect(sourcedb)
    targetconn = sqlite3.connect(targetdb)
    table = pd.read_sql_query('select * from ' + sourcetable, sourceconn)  # this is the line causing the crash
    # read dict in the format {"oldcol1name": "newcol1name", "oldcol2name": "newcol2name"}
    rename = {v: k for k, v in toberenamed.items()}
    # rename columns
    generic_table = table.rename(columns=rename)
    # Write table to new database
    generic_table.to_sql(targettable, targetconn, if_exists="replace")
    targetconn.close()
    sourceconn.close()
I've also looked at solutions such as this one, but they assume you know the types of the columns.
An elegant solution would be very much appreciated.
Edit: I know there is a method in SQLite since the September release, 3.25.0, but I am stuck with version 2.6.0.
To elaborate on my comments...
If you have a table in foo.db and want to copy that table's data to a new table in bar.db with different column names:
$ sqlite3 foo.db
sqlite> ATTACH 'bar.db' AS bar;
sqlite> CREATE TABLE bar.newtable(newcolumn1, newcolumn2);
sqlite> INSERT INTO bar.newtable SELECT oldcolumn1, oldcolumn2 FROM main.oldtable;
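If you prefer to drive this from Python instead of the sqlite3 shell, a rough equivalent with the same placeholder database, table, and column names would be:

import sqlite3

con = sqlite3.connect("foo.db")
cur = con.cursor()
cur.execute("ATTACH 'bar.db' AS bar")
cur.execute("CREATE TABLE bar.newtable(newcolumn1, newcolumn2)")
# SQLite streams rows from the old table to the new one; nothing is loaded into Python memory.
cur.execute("INSERT INTO bar.newtable SELECT oldcolumn1, oldcolumn2 FROM main.oldtable")
con.commit()
con.close()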

Copy row from Cassandra database and then insert it using Python

I'm using the DataStax Python Driver for Apache Cassandra.
I want to read 100 rows from the database and then insert them again into the database after changing one value. I do not want to lose the previous records.
I know how to get my rows:
rows = session.execute('SELECT * FROM columnfamily LIMIT 100;')
for myrecord in rows:
    print(myrecord.timestamp)
I know how to insert new rows into the database:
stmt = session.prepare('''
    INSERT INTO columnfamily (rowkey, qualifier, info, act_date, log_time)
    VALUES (?, ?, ?, ?, ?)
    IF NOT EXISTS
''')
results = session.execute(stmt, [arg1, arg2, ...])
My problems are that:
I do not know how to change only one value in a row.
I don't know how to insert rows into the database without using CQL. My columnfamily has more than 150 columns, and writing all their names in a query does not seem like the best idea.
To conclude:
Is there a way to get the rows, modify one value in every one of them, and then insert these rows back into the database without writing out the full CQL?
First, you need to select only the needed columns from Cassandra - it will be faster to transfer the data. You need to include all the columns of the primary key plus the column that you want to change.
After you get the data, you can use the UPDATE command to update only the necessary column (example from the documentation):
UPDATE cycling.cyclist_name
SET comments = 'Rides hard, gets along with others, a real winner'
WHERE id = fb372533-eb95-4bb4-8685-6ef61e994caa;
You can also use a prepared statement to make it more performant; a sketch is shown below.
But be careful - UPDATE and INSERT in CQL are really UPSERTs, so if you change columns that are part of the primary key, it will create a new entry...
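A rough sketch of that prepared-statement approach with the DataStax driver; the contact point, the keyspace, the assumption that rowkey and qualifier form the primary key, and info being a text column are all placeholders for your actual schema:

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])          # placeholder contact point
session = cluster.connect('my_keyspace')  # placeholder keyspace

# Select only the primary key plus the column to change.
rows = session.execute('SELECT rowkey, qualifier, info FROM columnfamily LIMIT 100')

# Prepared statement: UPDATE in CQL is an upsert, so no separate INSERT is needed.
update = session.prepare(
    'UPDATE columnfamily SET info = ? WHERE rowkey = ? AND qualifier = ?'
)
for row in rows:
    # Change one value; assumes info is a text column.
    session.execute(update, [row.info + '_changed', row.rowkey, row.qualifier])

cluster.shutdown()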

Bulk update Postgres column from python dataframe

I am using the below Python code to update a Postgres DB column value based on Id. This loop has to run for thousands of records and it is taking a long time.
Is there a way I can pass an array of dataframe values instead of looping over each row?
for i in range(0, len(df)):
    QUERY = """ UPDATE "Table" SET "value"='%s' WHERE "Table"."id"='%s'
    """ % (df['value'][i], df['id'][i])
    cur.execute(QUERY)
conn.commit()
It depends on the library you use to communicate with PostgreSQL, but bulk inserts are usually much faster via the COPY FROM command.
If you use psycopg2, it is as simple as the following:
cursor.copy_from(io.StringIO(string_variable), "destination_table", columns=('id', 'value'))
Where string_variable is a tab- and newline-delimited dataset like 1\tvalue1\n2\tvalue2\n.
To achieve a performant bulk update I would (a full sketch follows this list):
Create a temporary table, e.g. CREATE TEMPORARY TABLE tmp_table (id integer, value text);
Insert the records with copy_from;
Update the destination table with UPDATE destination_table SET value = t.value FROM tmp_table t WHERE destination_table.id = t.id, or any other preferred syntax.
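Putting the three steps together, a minimal psycopg2 sketch; the connection string, the column types, and df being the dataframe from the question are assumptions:

import io
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = conn.cursor()

# 1. Create a temporary table with the columns we are going to load (types assumed).
cur.execute("CREATE TEMPORARY TABLE tmp_table (id integer, value text)")

# 2. Bulk-load the id/value pairs from the dataframe via COPY.
buffer = io.StringIO()
for row in df.itertuples(index=False):  # df is the dataframe from the question
    buffer.write(f"{row.id}\t{row.value}\n")
buffer.seek(0)
cur.copy_from(buffer, "tmp_table", columns=("id", "value"))

# 3. Update the destination table from the temporary table in one statement.
cur.execute(
    "UPDATE destination_table SET value = t.value "
    "FROM tmp_table t WHERE destination_table.id = t.id"
)
conn.commit()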

Update existing row in database from pandas df

I have a PostgreSQL db. Pandas has a 'to_sql' function to write the records of a dataframe into a database. But I haven't found any documentation on how to update an existing database row using pandas when I'm finished with the dataframe.
Currently I am able to read a database table into a dataframe using pandas read_sql_table. I then work with the data as necessary. However I haven't been able to figure out how to write that dataframe back into the database to update the original rows.
I don't want to have to overwrite the whole table. I just need to update the rows that were originally selected.
One way is to make use of an SQLAlchemy "table class" together with session.merge(row_data) and session.commit().
Here is an example:
for row in range(0, len(df)):
    row_data = table_class(column_1=df.iloc[row]['column_name'],
                           column_2=df.iloc[row]['column_name'],
                           ...
                           )
    session.merge(row_data)
    session.commit()
For the SQLAlchemy case of reading a table into a df, changing the df, and then updating the table values based on the df, I found df.to_sql to work with name=<table_name>, index=False, if_exists='replace'.
This should replace the old values in the table with the ones you changed in the df.
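For example, a minimal sketch of that call; the engine URL, table name, and the edit itself are placeholders:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder connection URL
df = pd.read_sql_table("my_table", engine)
df.loc[df["id"] == 42, "value"] = "new value"  # make your changes here
# Drop and recreate the table with the modified dataframe.
df.to_sql(name="my_table", con=engine, index=False, if_exists="replace")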
