Python SQLAlchemy bulk update with non-serializable JSON attributes - python

I am trying to better understand how I can bulk update rows in SQLAlchemy, applying a Python function to each row whose result has to be dumped to JSON, without iterating over the rows individually:
def do_something(x):
    return x.id + x.offset

table.update({Table.updated_field: do_something(Table)})
This is a simplification of what I am trying to accomplish, except that I get the error TypeError: Object of type InstrumentedAttribute is not JSON serializable.
Any thoughts on how to fix the issue here?

Why are you casting your Table id to a JSON string? Remove it and try again.
Edit:
You can't apply a Python function to the model class in a bulk update. You can, for example, dump a single instance:
table.update({Table.updated_field: json.dumps(object_of_my_table._asdict())})
If you want to update the column with the whole object, you must loop over the rows and dump each one in the update:
for table in dbsession.query(Table):
    table.updated_field = json.dumps(table._asdict())
    dbsession.add(table)
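A minimal sketch of that loop applied to the question's do_something(), assuming dbsession is an open Session and that the computed value should be stored as JSON in updated_field:
import json

for row in dbsession.query(Table):
    # compute the value in Python, then serialise it before assigning
    row.updated_field = json.dumps(do_something(row))
dbsession.commit()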

If you need to update millions of rows, the best way in my experience is with Pandas and bulk_update_mappings.
You can load data from the DB in bulk as a DataFrame with read_sql by passing a query statement and your engine object.
import pandas as pd
query = session.query(Table)
table_data = pd.read_sql(query.statement, engine)
Note that read_sql has a chunksize parameter which causes it to return an iterator, so if the table is too large to fit in memory, you can process it in a loop with however many rows your machine can handle at once:
for table_chunk in pd.read_sql(query.statement, engine, chunksize=1_000_000):
    ...
From there you can use apply to alter each column with any custom function you want:
table_data["column_1"] = table_data["column_1"].apply(do_something)
Then, converting the DataFrame to a dict with the records orientation puts it in the appropriate format for bulk_update_mappings:
table_data = table_data.to_dict("records")
session.bulk_update_mappings(Table, table_data)
session.commit()
Additionally, if you need to perform a lot of json operations for your updates, I've used orjson for updates like this in the past which also provides a notable speed improvement over the standard library's json.
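Putting those steps together, a minimal sketch for the original question might look like the following. It assumes the Table model from the question (integer id and offset columns plus an updated_field column), an existing session and engine, and relies on the fact that bulk_update_mappings matches rows by primary key, so each mapping dict must include id:
import orjson  # optional; the standard library's json works too, just slower
import pandas as pd

query = session.query(Table.id, Table.offset)
for chunk in pd.read_sql(query.statement, engine, chunksize=100_000):
    # compute the new value in Python and serialise it to a JSON string
    chunk["updated_field"] = (chunk["id"] + chunk["offset"]).apply(
        lambda v: orjson.dumps(int(v)).decode()
    )
    mappings = chunk[["id", "updated_field"]].to_dict("records")
    session.bulk_update_mappings(Table, mappings)
    session.commit()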

Without the requirement to serialise to JSON,
session.query(Table).update({'updated_field': Table.id + Table.offset})
would work fine, performing all computations and updates in the database. However
session.query(Table).update({'updated_field': json.dumps(Table.id + Table.offset)})
does not work, because it mixes Python-level operations (json.dumps) with database-level operations (add id and offset for all rows).
Fortunately, many RDBMS provide JSON functions (SQLite, PostgreSQL, MariaDB, MySQL) so that we can do the work solely in the database layer. This is considerably more efficient than fetching data into the Python layer, mutating it, and writing it back to the database. Unfortunately, the available functions and their behaviours are not consistent across RDBMS.
The following script should work for SQLite, PostgreSQL and MariaDB (and probably MySQL too). These assumptions are made:
id and offset are both columns in the database table being updated
both are integers, as is their sum
the desired result is that their sum is written to a JSON column as a scalar
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.dialects.postgresql import JSONB

urls = [
    'sqlite:///so73956014.db',
    'postgresql+psycopg2:///test',
    'mysql+pymysql://root:root@localhost/test',
]

for url in urls:
    engine = sa.create_engine(url, echo=False, future=True)
    print(f'Checking {engine.dialect.name}')
    Base = orm.declarative_base()
    JSON = JSONB if engine.dialect.name == 'postgresql' else sa.JSON

    class Table(Base):
        __tablename__ = 't73956014'

        id = sa.Column(sa.Integer, primary_key=True)
        offset = sa.Column(sa.Integer)
        updated_field = sa.Column(JSON)

    Base.metadata.drop_all(engine, checkfirst=True)
    Base.metadata.create_all(engine)

    Session = orm.sessionmaker(engine, future=True)

    with Session.begin() as s:
        ts = [Table(offset=o * 10) for o in range(1, 4)]
        s.add_all(ts)

    # Use DB-specific function to serialise to JSON.
    if engine.dialect.name == 'postgresql':
        func = sa.func.to_jsonb
    else:
        func = sa.func.json_quote
    # MariaDB requires that the argument to json_quote is a character type.
    if engine.dialect.name in ['mysql', 'mariadb']:
        expr = sa.cast(Table.id + Table.offset, sa.Text)
    else:
        expr = Table.id + Table.offset

    with Session.begin() as s:
        s.query(Table).update(
            {Table.updated_field: func(expr)}
        )

    with Session() as s:
        ts = s.scalars(sa.select(Table))
        for t in ts:
            print(t.id, t.offset, t.updated_field)

    engine.dispose()
Output:
Checking sqlite
1 10 11
2 20 22
3 30 33
Checking postgresql
1 10 11
2 20 22
3 30 33
Checking mysql
1 10 11
2 20 22
3 30 33
Other functions can be used if the desired result is an object or an array. If you are updating an existing JSON column value, the column may need to use the Mutable extension.
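For instance, a hedged sketch (reusing Table, Session and engine from the script above) that writes a JSON object with a single key instead of a scalar; jsonb_build_object is PostgreSQL-specific, while SQLite, MySQL and MariaDB provide json_object:
# Build {"sum": id + offset} for every row, entirely in the database layer.
if engine.dialect.name == 'postgresql':
    obj = sa.func.jsonb_build_object('sum', Table.id + Table.offset)
else:
    obj = sa.func.json_object('sum', Table.id + Table.offset)

with Session.begin() as s:
    s.query(Table).update({Table.updated_field: obj})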

Related

How can we update multiple ids which are in a CSV file in a Python script?

I am trying to update a table based on id. I have more than 1.4 million ids, so I have kept them in a CSV file and I am processing it in chunks in my code. For now I have kept 10 records in the CSV file for testing, but my update query is not working: the id value is passed into the query like id = '3 a6P3a0000058StwEAE.
import pandas as pd
from sqlalchemy import create_engine

DF = pd.read_csv("/home/bha/harddeleteid10.csv", names=['colm_id'], header=None)

def chunker(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

for i in chunker(DF['colm_id'], 3):
    engine = create_engine("db", echo=True)
    conn = engine.connect()
    conn.autocommit = True
    conn.execute(f"""update salesforce_fr.apttus_proposal__proposal_line_item__c_poc set isdeleted=True where id = '{i}'""")
Below is the id being passed into the query by the loop above (i is of Series dtype).
update salesforce_fr.apttus_proposal__proposal_line_item__c_poc set isdeleted=True where id = '6 a6P3a0000057TGuEAM
The main problem is the textual SQLAlchemy statement: it does not follow the SQLAlchemy textual SQL recommendations.
Replace this
conn.execute(f"""update salesforce_fr.apttus_proposal__proposal_line_item__c_poc set isdeleted=True where id = '{i}'""")
with this
# build a SQLAlchemy statement using textual SQL
from sqlalchemy import text

statement = text("update salesforce_fr.apttus_proposal__proposal_line_item__c_poc set isdeleted=True where id = (:x)")
statement = statement.bindparams(x=i)  # bindparams returns a new statement
conn.execute(statement)
There was a lot of extra code in your example. I hope this helps.
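Since the question processes the ids in chunks, a hedged sketch of binding a whole chunk at once with an expanding IN parameter (the connection string is a placeholder, and chunker and DF come from the question's code):
from sqlalchemy import bindparam, create_engine, text

engine = create_engine("db_connection_string")  # placeholder URL
stmt = text(
    "update salesforce_fr.apttus_proposal__proposal_line_item__c_poc "
    "set isdeleted=True where id in :ids"
).bindparams(bindparam("ids", expanding=True))

with engine.begin() as conn:
    for chunk in chunker(DF['colm_id'], 3):
        conn.execute(stmt, {"ids": chunk.tolist()})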

Reading an SQL query into a Dask DataFrame

I'm trying to create a function that takes an SQL SELECT query as a parameter and uses dask to read its results into a dask DataFrame using the dask.dataframe.read_sql_query function. I am new to dask and to SQLAlchemy.
I first tried this:
import dask.dataframe as dd
query = "SELECT name, age, date_of_birth from customer"
df = dd.read_sql_query(sql=query, con=con_string, index_col="name", npartitions=10)
As you probably already know, this won't work because the sql parameter has to be an SQLAlchemy selectable and more importantly, TextClause isn't supported.
I then wrapped the query behind a select like this:
import dask.dataframe as dd
from sqlalchemy import sql
query = "SELECT name, age, date_of_birth from customer"
sa_query = sql.select(sql.text(query))
df = dd.read_sql_query(sql=sa_query, con=con_string, index_col="name")
This fails too, with a very strange error that I have been trying to solve. The problem is that dask needs to infer the types of the columns, and it does so by reading the first head_rows rows in the table - 5 rows by default. This line in the dask codebase adds a LIMIT ? to the query, which ends up being
SELECT name, age, date_of_birth from customer LIMIT param_1
The param_1 doesn't get substituted with the right value - 5 in this case. It then fails on the next line, https://github.com/dask/dask/blob/main/dask/dataframe/io/sql.py#L119, that evaluates the SQL expression.
sqlalchemy.exc.ProgrammingError: (mariadb.ProgrammingError) You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT name, age, date_of_birth from customer
LIMIT ?' at line 1
[SQL: SELECT SELECT name, age, date_of_birth from customer
LIMIT ?]
[parameters: (5,)]
(Background on this error at: https://sqlalche.me/e/14/f405)
I can't understand why param_1 wasn't substituted with the value of head_rows. One can see from the error message that it detects there's a parameter that needs to be used for the substitution but for some reason it doesn't actually substitute it.
Perhaps, I didn't correctly create the SQLAlchemy selectable?
I can simply use pandas.read_sql and create a dask dataframe from the resulting pandas dataframe but that defeats the purpose of using dask in the first place.
I have the following constraints:
I cannot change the function to accept a ready-made sqlalchemy selectable. This feature will be added to a private library used at my company, and various projects using this library do not use sqlalchemy.
Passing meta to the custom function is not an option because it would require the caller to create it. However, passing a meta attribute to read_sql_query and setting head_rows=0 is completely OK, as long as there's an efficient way to retrieve/create it.
While dask-sql might work for this case, using it is not an option, unfortunately.
How can I go about correctly reading an SQL query into dask dataframe?
The crux of the problem is this line:
sa_query = sql.select(sql.text(query))
What is happening is that we are constructing a nested SELECT query,
which can cause a problem downstream.
Let's first create a test database:
# create a test database (using https://stackoverflow.com/a/64898284/10693596)
from sqlite3 import connect
from dask.datasets import timeseries

con = "test.sql"
db = connect(con)

# create a pandas df and store it (timestamp is dropped to make sure
# that the index is numeric)
df = (
    timeseries(start="2000-01-01", end="2000-01-02", freq="1h", seed=0)
    .compute()
    .reset_index()
)
df.to_sql("ticks", db, if_exists="replace")
Next, let's try to get things working with pandas without sqlalchemy:
from pandas import read_sql_query
con = "sqlite:///test.sql"
query = "SELECT * FROM ticks LIMIT 3"
meta = read_sql_query(sql=query, con=con).set_index("index")
print(meta)
# id name x y
# index
# 0 998 Ingrid 0.760997 -0.381459
# 1 1056 Ingrid 0.506099 0.816477
# 2 1056 Laura 0.316556 0.046963
Now, let's add sqlalchemy functions:
from pandas import read_sql_query
from sqlalchemy.sql import text, select
con = "sqlite:///test.sql"
query = "SELECT * FROM ticks LIMIT 3"
sa_query = select(text(query))
meta = read_sql_query(sql=sa_query, con=con).set_index("index")
# OperationalError: (sqlite3.OperationalError) near "SELECT": syntax error
# [SQL: SELECT SELECT * FROM ticks LIMIT 3]
# (Background on this error at: https://sqlalche.me/e/14/e3q8)
Note the SELECT SELECT due to running sqlalchemy.select on an existing query. This can cause problems. How to fix this? In general, I don't think there's a safe and robust way of transforming arbitrary SQL queries into their sqlalchemy equivalent, but if this is for an application where you know that users will only run SELECT statements, you can manually sanitize the query before passing it to sqlalchemy.select:
from dask.dataframe import read_sql_query
from sqlalchemy.sql import select, text

con = "sqlite:///test.sql"
query = "SELECT * FROM ticks"

def _remove_leading_select_from_query(query):
    if query.startswith("SELECT "):
        return query.replace("SELECT ", "", 1)
    else:
        return query

sa_query = select(text(_remove_leading_select_from_query(query)))
ddf = read_sql_query(sql=sa_query, con=con, index_col="index")
print(ddf)
print(ddf.head(3))
# Dask DataFrame Structure:
# id name x y
# npartitions=1
# 0 int64 object float64 float64
# 23 ... ... ... ...
# Dask Name: from-delayed, 2 tasks
# id name x y
# index
# 0 998 Ingrid 0.760997 -0.381459
# 1 1056 Ingrid 0.506099 0.816477
# 2 1056 Laura 0.316556 0.046963
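To also satisfy the constraint from the question (passing meta and head_rows=0 so dask skips its LIMIT-based type inference), a hedged variation reusing the same con, query and sa_query might look like this; note that dask requires npartitions or divisions when meta is supplied explicitly:
from pandas import read_sql_query as pd_read_sql_query

# Build meta cheaply from a few rows via pandas, then hand it to dask so it
# does not need to run its own LIMIT query for type inference.
meta = pd_read_sql_query(sql=f"{query} LIMIT 3", con=con).set_index("index")
ddf = read_sql_query(sql=sa_query, con=con, index_col="index",
                     meta=meta, head_rows=0, npartitions=10)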

Update multiple rows in MySQL with Pandas dataframe

I have worked on a dataframe (previously extracted from a table with SQLAlchemy), and now I want to write the changes back by updating that table.
I have done it in this very inefficient way:
engine = sql.create_engine(connect_string)
connection = engine.connect()
metadata = sql.MetaData()
pbp = sql.Table('playbyplay', metadata, autoload=True, autoload_with=engine)

for i in range(1, len(playbyplay_substitutions)):
    query_update = (
        'update playbyplay set Player_1_Visitor = {0}, Player_2_Visitor = {1}, '
        'Player_3_Visitor = {2}, Player_4_Visitor = {3}, Player_5_Visitor = {4} '
        'where id_match = {5} and actionNumber = {6}'.format(
            playbyplay_substitutions.loc[i, 'Player_1_Visitor_y'],
            playbyplay_substitutions.loc[i, 'Player_2_Visitor_y'],
            playbyplay_substitutions.loc[i, 'Player_3_Visitor_y'],
            playbyplay_substitutions.loc[i, 'Player_4_Visitor_y'],
            playbyplay_substitutions.loc[i, 'Player_5_Visitor_y'],
            playbyplay_substitutions.loc[i, 'id_match'],
            playbyplay_substitutions.loc[i, 'actionNumber']))
    connection.execute(query_update)
playbyplay_substitutions is my dataframe, playbyplay is my table, and the rest are the fields that I want to update or the keys in my table. I am looking for a more efficient solution than the one that I currently have for SQLAlchemy integrated with MySQL.
Consider using proper placeholders instead of manually formatting strings:
query_update = sql.text("""
    UPDATE playbyplay
    SET Player_1_Visitor = :Player_1_Visitor_y
      , Player_2_Visitor = :Player_2_Visitor_y
      , Player_3_Visitor = :Player_3_Visitor_y
      , Player_4_Visitor = :Player_4_Visitor_y
      , Player_5_Visitor = :Player_5_Visitor_y
    WHERE id_match = :id_match AND actionNumber = :actionNumber
""")

# .iloc[1:] mimics the original for-loop that started from 1
args = playbyplay_substitutions[[
    'Player_1_Visitor_y', 'Player_2_Visitor_y', 'Player_3_Visitor_y',
    'Player_4_Visitor_y', 'Player_5_Visitor_y', 'id_match',
    'actionNumber']].iloc[1:].to_dict('records')
connection.execute(query_update, args)
If your driver is sufficiently clever, this allows it to prepare a statement once and reuse it over the data, instead of emitting queries one by one. This also avoids possible accidental SQL injection problems, where your data resembles SQL constructs when formatted as a string manually.
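A hedged usage note: the executemany can be run inside an explicit transaction block so all updates are committed together (engine is the Engine created in the question):
# Commits on success, rolls back if any row fails.
with engine.begin() as connection:
    connection.execute(query_update, args)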

How to return specific dictionary keys from within a nested list from a jsonb column in sqlalchemy

I am attempting to return some named columns from a jsonb data set that is stored with PostgreSQL.
I am able to run a raw query that meets my needs directly, however I am trying to run the query utilising SQLAlchemy, in order to ensure that my code is 'pythonic' and easy to read.
The query that returns the correct result (two columns) is:
SELECT
tmp.item->>'id',
tmp.item->>'name'
FROM (SELECT jsonb_array_elements(t.data -> 'users') AS item FROM tpeople t) as tmp
Example json (each user has 20+ columns)
{ "results":247, "users": [
{"id":"202","regdate":"2015-12-01","name":"Bob Testing"},
{"id":"87","regdate":"2014-12-12","name":"Sally Testing"},
{"id":"811", etc etc}
...
]}
The table is simple enough, with a PK, datetime of json extraction, and the jsonb column for the extract
CREATE TABLE tpeople
(
record_id bigint NOT NULL DEFAULT nextval('"tpeople_record_id_seq"'::regclass) ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
scrape_time timestamp without time zone NOT NULL,
data jsonb NOT NULL,
CONSTRAINT "tpeople_pkey" PRIMARY KEY (record_id)
);
Additionally I have a People Class that looks as follows:
class people(Base):
    __tablename__ = 'tpeople'

    record_id = Column(BigInteger, primary_key=True, server_default=text("nextval('\"tpeople_record_id_seq\"'::regclass)"))
    scrape_time = Column(DateTime, nullable=False)
    data = Column(JSONB(astext_type=Text()), nullable=False)
Presently my code to return the two columns looks like this:
from db.db_conn import get_session  # Generic connector for my db
from model.models import people
from sqlalchemy import func
sess = get_session()
sub = sess.query(func.jsonb_array_elements(people.data["users"]).label("item")).subquery()
test = sess.query(sub.c.item).select_entity_from(sub).all()
SQLAlchemy generates the following SQL:
SELECT anon_1.item AS anon_1_item
FROM (SELECT jsonb_array_elements(tpeople.data -> %(data_1)s) AS item
FROM tpeople) AS anon_1
{'data_1': 'users'}
But nothing I seem to do allows me to get only certain columns within the item itself, like I can in the raw SQL. Some of the approaches I have tried are as follows (they all error out):
test = sess.query("sub.item.id").select_entity_from(sub).all()
test = sess.query(sub.item.["id"]).select_entity_from(sub).all()
aas = func.jsonb_to_recordset(people.data["users"])
res = sess.query("id").select_from(aas).all()
sub = select(func.jsonb_array_elements(people.data["users"]).label("item"))
Presently I can extract the columns I need in a simple for loop, but this seems like a hacky way to do it, and I'm sure there is something dead obvious I'm missing.
for row in test:
    print(row.item['id'])
I searched for a few hours and eventually found someone who had arrived at this accidentally while trying to get a different result.
sub = sess.query(func.jsonb_array_elements(people.data["users"]).label("item")).subquery()
tmp = sub.c.item.op('->>')('id')
tmp2 = sub.c.item.op('->>')('name')
test = sess.query(tmp, tmp2).all()
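As a hedged follow-up, the extracted fields can also be labelled so the result rows expose readable keys:
sub = sess.query(func.jsonb_array_elements(people.data["users"]).label("item")).subquery()
test = sess.query(
    sub.c.item.op('->>')('id').label('id'),
    sub.c.item.op('->>')('name').label('name'),
).all()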

Pandas to_sql fails on duplicate primary key

I'd like to append to an existing table, using pandas df.to_sql() function.
I set if_exists='append', but my table has primary keys.
I'd like to do the equivalent of insert ignore when trying to append to the existing table, so I would avoid a duplicate entry error.
Is this possible with pandas, or do I need to write an explicit query?
There is unfortunately no option to specify "INSERT IGNORE". This is how I got around that limitation, inserting only the rows that were not duplicates (the dataframe name is df):
from sqlalchemy.exc import IntegrityError

for i in range(len(df)):
    try:
        df.iloc[i:i+1].to_sql(name="Table_Name", if_exists='append', con=Engine)
    except IntegrityError:
        pass  # or any other action
You can do this with the method parameter of to_sql:
from sqlalchemy.dialects.mysql import insert

def insert_on_duplicate(table, conn, keys, data_iter):
    insert_stmt = insert(table.table).values(list(data_iter))
    on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(insert_stmt.inserted)
    conn.execute(on_duplicate_key_stmt)

df.to_sql('trades', dbConnection, if_exists='append', chunksize=4096, method=insert_on_duplicate)
for older versions of sqlalchemy, you need to pass a dict to on_duplicate_key_update. i.e., on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(dict(insert_stmt.inserted))
Please note that if_exists='append' relates to the existence of the table and what to do if the table does not exist; it does not relate to the content of the table.
See the docs here: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html
if_exists : {‘fail’, ‘replace’, ‘append’}, default ‘fail’
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if does not exist.
Pandas has no option for it currently, but here is the Github issue. If you need this feature too, just upvote for it.
The for-loop method above slows things down significantly. There's a method parameter you can pass to pandas.DataFrame.to_sql to customise the SQL used for insertion:
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html#pandas.DataFrame.to_sql
The code below should work for Postgres and do nothing if there's a conflict with the primary key "unique_code". Change the insert dialect for your db.
def insert_do_nothing_on_conflicts(sqltable, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    sqltable : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    from sqlalchemy.dialects.postgresql import insert
    from sqlalchemy import table, column

    columns = []
    for c in keys:
        columns.append(column(c))

    if sqltable.schema:
        table_name = '{}.{}'.format(sqltable.schema, sqltable.name)
    else:
        table_name = sqltable.name

    mytable = table(table_name, *columns)

    insert_stmt = insert(mytable).values(list(data_iter))
    do_nothing_stmt = insert_stmt.on_conflict_do_nothing(index_elements=['unique_code'])
    conn.execute(do_nothing_stmt)

df.to_sql('mytable', con=sql_engine, if_exists='append', method=insert_do_nothing_on_conflicts)
Pandas doesn't support editing the actual SQL syntax of the .to_sql method, so you might be out of luck. There's some experimental programmatic workarounds (say, read the Dataframe to a SQLAlchemy object with CALCHIPAN and use SQLAlchemy for the transaction), but you may be better served by writing your DataFrame to a CSV and loading it with an explicit MySQL function.
CALCHIPAN repo: https://bitbucket.org/zzzeek/calchipan/
I had trouble where I was still getting the IntegrityError.
...strange, but I just took the above and worked it backwards:
for i, row in df.iterrows():
    sql = "SELECT * FROM `Table_Name` WHERE `key` = '{}'".format(row.Key)
    found = pd.read_sql(sql, con=Engine)
    if len(found) == 0:
        df.iloc[i:i+1].to_sql(name="Table_Name", if_exists='append', con=Engine)
In my case, I was trying to insert new data into an empty table, but some of the rows were duplicated, which is almost the same issue as here. I considered fetching the existing data, merging it with the new data and continuing from there, but that is not optimal and may only work for small data, not huge tables.
As pandas does not provide any handling for this situation right now, I looked for a suitable workaround and made my own. I'm not sure whether it will work for you, but I decided to control my data first rather than trust to luck: I remove duplicates before calling .to_sql, so if any error happens I know more about my data and what is going on:
import pandas as pd

def write_to_table(table_name, data):
    # Sort by price first, so drop_duplicates keeps only the lowest-priced row
    data.sort(key=lambda row: row['price'])
    df = pd.DataFrame(data)
    df.drop_duplicates(subset=['id_key'], keep='first', inplace=True)
    df.to_sql(table_name, engine, index=False, if_exists='append', schema='public')
So in my case I wanted to keep the lowest price for each row (by the way, I was passing a list of dicts as data), and for that I sorted first. The sort is not strictly necessary, but it is an example of what I mean by controlling the data you want to keep.
I hope this helps someone who is in almost the same situation as mine.
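A hypothetical usage of the helper above; the table name and the id_key/price columns are just the assumptions carried over from the snippet:
rows = [
    {"id_key": 1, "price": 9.99},
    {"id_key": 1, "price": 7.99},  # duplicate id_key; the lower price is kept
    {"id_key": 2, "price": 5.00},
]
write_to_table("my_table", rows)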
When you use SQL Server you'll get a SQL error when you enter a duplicate value into a table that has a primary key constraint. You can fix it by altering your table:
CREATE TABLE [dbo].[DeleteMe](
    [id] [uniqueidentifier] NOT NULL,
    [Value] [varchar](max) NULL,
    CONSTRAINT [PK_DeleteMe]
        PRIMARY KEY ([id] ASC)
        WITH (IGNORE_DUP_KEY = ON));  -- <-- add this option
Taken from https://dba.stackexchange.com/a/111771.
Now your df.to_sql() should work again.
The solutions by Jayen and Huy Tran helped me a lot, but they didn't work straight out of the box. The problem I faced with Jayen's code is that it requires the DataFrame columns to be exactly those of the database table. This was not true in my case, as there were some DataFrame columns that I don't write to the database.
I modified the solution so that it considers the column names.
from sqlalchemy.dialects.mysql import insert
import itertools

def insertWithConflicts(sqltable, conn, keys, data_iter):
    """
    Execute SQL statement inserting data, whilst taking care of conflicts
    Used to handle duplicate key errors during database population
    This is my modification of the code snippet
    from https://stackoverflow.com/questions/30337394/pandas-to-sql-fails-on-duplicate-primary-key
    The help page from https://docs.sqlalchemy.org/en/14/core/dml.html#sqlalchemy.sql.expression.Insert.values
    proved useful.

    Parameters
    ----------
    sqltable : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted. It is a zip object.
        Its length is equal to the chunksize passed in df.to_sql()
    """
    vals = [dict(zip(z[0], z[1])) for z in zip(itertools.cycle([keys]), data_iter)]
    insertStmt = insert(sqltable.table).values(vals)
    doNothingStmt = insertStmt.on_duplicate_key_update(dict(insertStmt.inserted))
    conn.execute(doNothingStmt)
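Hypothetical usage, mirroring the earlier answer's call (the table name and connection object are placeholders):
df.to_sql('trades', dbConnection, if_exists='append', chunksize=4096,
          method=insertWithConflicts)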
I faced the same issue and I adopted the solution provided by Huy Tran for a while, until my tables started to have schemas.
I had to improve his answer a bit and this is the final result:
from sqlalchemy import column, table
from sqlalchemy.dialects.postgresql import insert

def do_nothing_on_conflicts(sql_table, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    sql_table : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    columns = []
    for c in keys:
        columns.append(column(c))

    if sql_table.schema:
        my_table = table(sql_table.name, *columns, schema=sql_table.schema)
        # table_name = '{}.{}'.format(sql_table.schema, sql_table.name)
    else:
        my_table = table(sql_table.name, *columns)
        # table_name = sql_table.name
        # my_table = table(table_name, *columns)

    insert_stmt = insert(my_table).values(list(data_iter))
    do_nothing_stmt = insert_stmt.on_conflict_do_nothing()
    conn.execute(do_nothing_stmt)
How to use it:
history.to_sql('history', schema=schema, con=engine, method=do_nothing_on_conflicts)
The idea is the same as Nfern's, but it uses a recursive function to divide the df in half on each iteration, in order to skip the row or rows causing the integrity violation.
def insert(df):
    try:
        # inserting into backup table
        df.to_sql("table", con=engine, if_exists='append', index=False, schema='schema')
    except:
        rows = df.shape[0]
        if rows > 1:
            df1 = df.iloc[:int(rows/2), :]
            df2 = df.iloc[int(rows/2):, :]
            insert(df1)
            insert(df2)
        else:
            print(f"{df} not inserted. Integrity violation, duplicate primary key/s")
