Python migration from MySQL to Postgres - FK constraints not dropping

I'm running a Python script with SQLAlchemy to export data from production and import it into a Postgres DB on a daily basis. The script runs successfully once, but on the second run and beyond it fails. As you will see in the script below, the error returned suggests that the dependencies between the tables (foreign keys) are the cause of the import failure; however, I do not understand why this issue is not circumvented by the sorted_tables object. I've removed the initialization code (repository imports, DB connection objects) to simplify the post and reduce clutter.
def create_db(src, dst, src_schema, dst_schema, drop_dst_schema=False):
    if drop_dst_schema:
        post_db.engine.execute('DROP SCHEMA IF EXISTS {0} CASCADE'.format(dst_schema))
        print "Schema {0} Dropped".format(dst_schema)
    post_db.engine.execute('CREATE SCHEMA IF NOT EXISTS {0}'.format(dst_schema))
    post_db.engine.execute('GRANT USAGE ON SCHEMA {0} TO {0}_ro'.format(dst_schema))
    post_db.engine.execute('GRANT USAGE ON SCHEMA {0} TO {0}_rw'.format(dst_schema))
    print "Schema {0} Created".format(dst_schema)

def create_table(tbl, dst_schema):
    dest_table = tbl
    dest_table.schema = dst_schema
    for col in dest_table.columns:
        if hasattr(col.type, 'collation'):
            col.type.collation = None
        if col.name == 'id':
            dest_table.append_constraint(PrimaryKeyConstraint(col))
        col.type = convert(col.type)
    timestamp_col = Column('timestamp', DateTime(timezone=False), server_default=func.now())
    #print tbl.c
    dest_table.append_column(timestamp_col)
    dest_table.create(post_db.engine, checkfirst=True)
    post_db.engine.execute('GRANT INSERT ON {1} to {0}_ro'.format(dst_schema, dest_table))
    post_db.engine.execute('GRANT ALL PRIVILEGES ON {1} to {0}_rw'.format(dst_schema, dest_table))
    print "Table {0} created".format(dest_table)

create_db(mysql_db.engine, post_db.engine, src_schema, dst_schema, drop_dst_schema=False)

mysql_meta = MetaData(bind=mysql_db.engine)
mysql_meta.reflect(schema=src_schema)
post_meta = MetaData(bind=post_db.engine)
post_meta.reflect(schema=dst_schema)

script_begin = time.time()
rejected_list = []

for table in mysql_meta.sorted_tables:
    df = mysql_db.sql_retrieve('select * from {0}'.format(table.name))
    df = df.where((pd.notnull(df)), None)
    print "Table {0} : {1}".format(table.name, len(df))
    dest_table = table
    dest_table.schema = dst_schema
    dest_table.drop(post_db.engine, checkfirst=True)
    create_table(dest_table, dst_schema)
    print "Table {0} emptied".format(dest_table.name)
    try:
        start = time.time()
        if len(df) > 10000:
            for g, df_new in df.groupby(np.arange(len(df)) // 10000):
                dict_items = df_new.to_dict(orient='records')
                post_db.engine.connect().execute(dest_table.insert().values(dict_items))
        else:
            dict_items = df.to_dict(orient='records')
            post_db.engine.connect().execute(dest_table.insert().values(dict_items))
        loadtime = time.time() - start
        print "Data loaded with datasize {0}".format(str(len(df)))
        print "Table {0} loaded to BI database with loadtime {1}".format(dest_table.name, loadtime)
    except:
        print "Table {0} could not be loaded".format(dest_table.name)
        rejected_list.append(dest_table.name)
If I drop the entire dst_schema before importing the data, the import succeeds.
This is the error I see:
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table A because other objects depend on it
DETAIL: constraint fk_rails_111193 on table B depends on table A
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: '\nDROP TABLE A']
Can someone steer me toward a possible solution?
Are there better alternatives to dropping the entire dst_schema before importing the data into the destination db (drop_dst_schema=True)?
def create_db(src,dst,src_schema,dst_schema,drop_dst_schema=True)
Does anyone have an idea why sorted_tables does not handle the dependencies in the schema? Am I misunderstanding this object?

You have several options:
Drop the whole schema every time
If you have a complex schema, with any kind of closed loop reference chain, your best option is to always drop the whole schema.
You could have some self-referencing tables (such as a persons table with a self-relation of type person parent-of person). You could also have a schema where table A references table B, which in turn references table A. For instance, you have one table persons and one table companies, and two relations (probably with intermediate tables): a company employs persons, and persons trade shares of companies.
In realistic cases like these, no matter what you do with sorted_tables, a plain table-by-table drop will never work.
If you're actually replicating data from another DB, and can afford the time, dropping and recreating the whole schema is the easiest solution to implement. Your code will be much simpler: fewer cases to consider.
DROP CASCADE
You can also drop the tables using DROP CASCADE. If one table is referenced by another, this will drop both (or as many as necessary). You have to make sure the order in which you DROP and CREATE gives you the end result you expect. I'd check very carefully that this scenario works in all cases.
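As an illustration, a minimal sketch in the style of the question's own code (it assumes the same post_db.engine, mysql_meta and dst_schema objects from the question):

# Drop every destination table with CASCADE so dependent FK constraints
# are removed along with the table instead of blocking the DROP.
for table in reversed(mysql_meta.sorted_tables):
    post_db.engine.execute(
        'DROP TABLE IF EXISTS "{0}"."{1}" CASCADE'.format(dst_schema, table.name)
    )

The constraints removed by CASCADE should come back when the referencing tables are recreated from the reflected metadata, so the end state should match a fresh load; verify that on your own schema.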
Drop all FK constraints, then recreate them at the end
There is also one last possibility: drop all FK constraints for all tables before manipulating them, and recreate them at the end. This way, you'll be able to drop any table at any moment.
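A rough sketch of that approach with SQLAlchemy's DDL constructs (again assuming the post_db.engine and dst_schema names from the question; adjust for your own metadata handling):

from sqlalchemy import MetaData
from sqlalchemy.schema import DropConstraint, AddConstraint

meta = MetaData(bind=post_db.engine)
meta.reflect(schema=dst_schema)

# Collect every FK constraint in the destination schema once.
fk_constraints = []
for table in meta.sorted_tables:
    for fk in table.foreign_keys:
        if fk.constraint not in fk_constraints:
            fk_constraints.append(fk.constraint)

# Drop the constraints so tables can be dropped/reloaded in any order...
for constraint in fk_constraints:
    post_db.engine.execute(DropConstraint(constraint))

# ... drop / recreate / reload the tables here ...

# ...then put the constraints back at the end.
for constraint in fk_constraints:
    post_db.engine.execute(AddConstraint(constraint))

Whether you need the re-add half depends on whether your create step already emits the FK constraints; with the question's reflected metadata it does, so in that setup the drop half is usually the part that matters.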

Related

Using arbitrary sqlalchemy select results (e.g., from a CTE) to create ORM instances

When I create an instance of an object (in my example below, a Company) I want to automagically create default, related objects. One way is to use a per-row, after-insert trigger, but I'm trying to avoid that route and use CTEs which are easier to read and maintain. I have this SQL working (underlying db is PostgreSQL and the only thing you need to know about table company is its primary key is: id SERIAL PRIMARY KEY and it has one other required column, name VARCHAR NOT NULL):
with new_company as (
-- insert my company row, returning the whole row
insert into company (name)
values ('Acme, Inc.')
returning *
),
other_related as (
-- herein I join to `new_company` and create default related rows
-- in other tables. Here we use, effectively, a no-op - what it
-- actually does is not germane to the issue.
select id from new_company
)
-- Having created the related rows, we return the row we inserted into
-- table `company`.
select * from new_company;
The above works like a charm, and with the recently added Select.add_cte() (in SQLAlchemy 1.4.21) I can write the above with the following Python:
import sqlalchemy as sa
from myapp.models import Company
new_company = (
    sa.insert(Company)
    .values(name='Acme, Inc.')
    .returning(Company)
    .cte(name='new_company')
)
other_related = (
    sa.select(sa.text('new_company.id'))
    .select_from(new_company)
    .cte('other_related')
)
fetch_company = (
    sa.select(sa.text('* from new_company'))
    .add_cte(other_related)
)
print(fetch_company)
And the output is:
WITH new_company AS
(INSERT INTO company (name) VALUES (:param_1) RETURNING company.id, company.name),
other_related AS
(SELECT new_company.id FROM new_company)
SELECT * from new_company
Perfect! But when I execute the above query I get back a Row:
>>> result = session.execute(fetch_company).fetchone()
>>> print(result)
(26, 'Acme, Inc.')
I can create an instance with:
>>> result = session.execute(fetch_company).fetchone()
>>> company = Company(**result)
But this instance, if added to the session, is in the wrong state, pending, and if I flush and/or commit, I get a duplicate key error because the company is already in the database.
If I try using Company in the select list, I get a bad query because sqlalchemy automagically sets the from-clause and I cannot figure out how to clear or explicitly set the from-clause to use my CTE.
I'm looking for one of several possible solutions:
annotate an arbitrary query in some way to say, "build an instance of MyModel, but use this table/alias", e.g., query = sa.select(Company).select_from(new_company.alias('company'), reset=True).
tell a session that an instance is persistent regardless of what the session thinks about the instance, e.g., company = Company(**result); session.add(company, force_state='persistent')
Obviously I could do another round-trip to the db with a call to session.merge() (as discussed in early comments of this question) so the instance ends up in the correct state, but that seems terribly inefficient especially if/when used to return lists of instances.
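One direction that might cover the first item (an untested sketch, not a confirmed API usage: aliased() accepts an arbitrary selectable, so it may let the ORM select Company rows FROM the CTE instead of the real table; whether the resulting instances end up in the state I need is unverified):

from sqlalchemy.orm import aliased

# Hypothetical: map the Company entity onto the new_company CTE.
company_cte = aliased(Company, new_company, name='company')
fetch_company = sa.select(company_cte).add_cte(other_related)
company = session.execute(fetch_company).scalar_one()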

Sqlalchemy adding multiple records and potential constraint violation

I have a table with a unique constraint across two columns, like:
CREATE TABLE entity (
id INT NOT NULL AUTO_INCREMENT,
zip_code INT NOT NULL,
entity_url VARCHAR(255) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY ix_uniq_zip_code_entity_url (zip_code, entity_url)
);
and a corresponding SQLAlchemy model. I am adding a lot of records and do not want to commit the session after each record. My assumption is that it is better to call session.add(new_record) multiple times and call session.commit() once.
But while adding new records I could get an IntegrityError because the constraint is violated. That is a normal situation and I just want to skip inserting such records, but it looks like I can only revert the entire transaction.
Also, I do not want to add complex checks like "get all records from the database where zip_code in [...] and entity_url in [...], then drop the matched data from records_to_insert".
Is there a way to instruct SQLAlchemy to drop records that violate the constraint?
My assumption that better to call session.add(new_record) multiple times and one time session.commit().
You might want to revisit this assumption. Batch processing of a lot of records usually lends itself to multiple commits -- what if you have 10k records and your code raises an exception on the 9,999th? You'll be forced to start over. The core question here is whether or not it makes sense for one of the records to exist in the database without the rest. If it does, then there's no problem committing on each entry (performance issues aside). In that case, you can simply catch the IntegrityError and call session.rollback() to continue down the list of records.
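A minimal sketch of that per-record pattern (the Entity model, session object and record keys are assumptions about your setup):

from sqlalchemy.exc import IntegrityError

for rec in records_to_insert:
    session.add(Entity(zip_code=rec['zip_code'], entity_url=rec['entity_url']))
    try:
        session.commit()      # one commit per record
    except IntegrityError:
        session.rollback()    # duplicate (zip_code, entity_url): skip it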
In any case, a similar question was asked on the SQLA mailing list and answered by the library's creator, Mike Bayer. He recommended removing the duplicates from the list of new records yourself, as this is easy to do with a dictionary or a set. This could be as simple as a dict comprehension:
new_entities = { (entity['zip_code'], entity['url']): entity for entity in new_entities}
(This would pick the last-seen duplicate as the one to add to the DB.)
Also note that he uses the SQLAlchemy core library to perform the inserts, rather than the ORM's session.add() method:
sess.execute(Entry.__table__.insert(), params=inserts)
This is a much faster option if you're dealing with a lot of records (like in his example, with 100,000 records).
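Putting the two ideas together, a rough sketch (the Entity model and new_entities list are assumptions; note this only removes duplicates within the batch, not conflicts with rows already present in the table):

# Deduplicate in Python, then do one Core INSERT with the survivors.
deduped = {(e['zip_code'], e['entity_url']): e for e in new_entities}
session.execute(Entity.__table__.insert(), params=list(deduped.values()))
session.commit()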
If you decide to insert your records row by row, you can check if it already exists before you do your insert. This may or may not be more elegant and efficient:
def record_exists(session, some_id):
    return session.query(exists().where(YourEntity.id == some_id)).scalar()

for item in items:
    if not record_exists(session, item.some_id):
        session.add(item)
        session.flush()
    else:
        print "Already exists, skipping..."

session.commit()

How to write SQL to update a field given only one record in the target table

I have a table named test in a MySQL database.
There are some fields in the test table, say, name.
However, there is only ever 0 or 1 record in the table.
When a new record, say name = fox, arrives, I'd like to update the targeted field of the test table.
I use Python to talk to MySQL, and my question is how to write the SQL.
PS: I tried not to use a WHERE expression, but failed.
Suppose I've got the connection to the db, like the following:
conn = MySQLdb.connect(host=myhost, ...)
What you need here is a query which does the Merge kind of operation on your data. Algorithmically:
When record exists
do Update
Else
do Insert
You can go through this article to get a fair idea on doing things in this situation:
http://www.xaprb.com/blog/2006/06/17/3-ways-to-write-upsert-and-merge-queries-in-mysql/
What I personally recommend is INSERT ... ON DUPLICATE KEY UPDATE.
In your scenario, something like
INSERT INTO test (name)
VALUES ('fox')
ON DUPLICATE KEY UPDATE
name = 'fox';
Using this kind of query you can handle the situation in a single shot.
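From Python, a minimal sketch using the MySQLdb connection from the question (parameter binding instead of string formatting; note this relies on the table having a PRIMARY or UNIQUE key for the conflict to fire):

cur = conn.cursor()
cur.execute(
    "INSERT INTO test (name) VALUES (%s) "
    "ON DUPLICATE KEY UPDATE name = VALUES(name)",
    ("fox",),
)
conn.commit()
cur.close()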

Bulk upsert (insert-update) a csv in postgres [duplicate]

A very frequently asked question here is how to do an upsert, which is what MySQL calls INSERT ... ON DUPLICATE KEY UPDATE and the standard supports as part of the MERGE operation.
Given that PostgreSQL doesn't support it directly (before pg 9.5), how do you do this? Consider the following:
CREATE TABLE testtable (
id integer PRIMARY KEY,
somedata text NOT NULL
);
INSERT INTO testtable (id, somedata) VALUES
(1, 'fred'),
(2, 'bob');
Now imagine that you want to "upsert" the tuples (2, 'Joe'), (3, 'Alan'), so the new table contents would be:
(1, 'fred'),
(2, 'Joe'), -- Changed value of existing tuple
(3, 'Alan') -- Added new tuple
That's what people are talking about when discussing an upsert. Crucially, any approach must be safe in the presence of multiple transactions working on the same table - either by using explicit locking, or otherwise defending against the resulting race conditions.
This topic is discussed extensively at Insert, on duplicate update in PostgreSQL?, but that's about alternatives to the MySQL syntax, and it's grown a fair bit of unrelated detail over time. I'm working on definitive answers.
These techniques are also useful for "insert if not exists, otherwise do nothing", i.e. "insert ... on duplicate key ignore".
9.5 and newer:
PostgreSQL 9.5 and newer support INSERT ... ON CONFLICT (key) DO UPDATE (and ON CONFLICT (key) DO NOTHING), i.e. upsert.
Comparison with ON DUPLICATE KEY UPDATE.
Quick explanation.
For usage see the manual - specifically the conflict_action clause in the syntax diagram, and the explanatory text.
Unlike the solutions for 9.4 and older that are given below, this feature works with multiple conflicting rows and it doesn't require exclusive locking or a retry loop.
The commit adding the feature is here and the discussion around its development is here.
If you're on 9.5 and don't need to be backward-compatible you can stop reading now.
9.4 and older:
PostgreSQL doesn't have any built-in UPSERT (or MERGE) facility, and doing it efficiently in the face of concurrent use is very difficult.
This article discusses the problem in useful detail.
In general you must choose between two options:
Individual insert/update operations in a retry loop; or
Locking the table and doing batch merge
Individual row retry loop
Using individual row upserts in a retry loop is the reasonable option if you want many connections concurrently trying to perform inserts.
The PostgreSQL documentation contains a useful procedure that'll let you do this in a loop inside the database. It guards against lost updates and insert races, unlike most naive solutions. It will only work in READ COMMITTED mode and is only safe if it's the only thing you do in the transaction, though. The function won't work correctly if triggers or secondary unique keys cause unique violations.
This strategy is very inefficient. Whenever practical you should queue up work and do a bulk upsert as described below instead.
Many attempted solutions to this problem fail to consider rollbacks, so they result in incomplete updates. Two transactions race with each other; one of them successfully INSERTs; the other gets a duplicate key error and does an UPDATE instead. The UPDATE blocks waiting for the INSERT to rollback or commit. When it rolls back, the UPDATE condition re-check matches zero rows, so even though the UPDATE commits it hasn't actually done the upsert you expected. You have to check the result row counts and re-try where necessary.
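As an illustration only (not a drop-in solution), here is roughly what the check-rowcount-and-retry shape looks like from Python with psycopg2 for the testtable example; conn is an assumed open connection in READ COMMITTED mode:

import psycopg2

def upsert_row(conn, row_id, somedata, max_attempts=5):
    # Naive retry loop: UPDATE first, fall back to INSERT, retry on races.
    for _ in range(max_attempts):
        with conn.cursor() as cur:
            cur.execute("UPDATE testtable SET somedata = %s WHERE id = %s",
                        (somedata, row_id))
            if cur.rowcount == 1:          # the UPDATE really hit a row
                conn.commit()
                return
            try:
                cur.execute("INSERT INTO testtable (id, somedata) VALUES (%s, %s)",
                            (row_id, somedata))
                conn.commit()
                return
            except psycopg2.IntegrityError:
                conn.rollback()            # raced with a concurrent INSERT; retry
    raise RuntimeError("upsert did not converge after retries")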
Some attempted solutions also fail to consider SELECT races. If you try the obvious and simple:
-- THIS IS WRONG. DO NOT COPY IT. It's an EXAMPLE.
BEGIN;
UPDATE testtable
SET somedata = 'blah'
WHERE id = 2;
-- Remember, this is WRONG. Do NOT COPY IT.
INSERT INTO testtable (id, somedata)
SELECT 2, 'blah'
WHERE NOT EXISTS (SELECT 1 FROM testtable WHERE testtable.id = 2);
COMMIT;
then when two run at once there are several failure modes. One is the already discussed issue with an update re-check. Another is where both UPDATE at the same time, matching zero rows and continuing. Then they both do the EXISTS test, which happens before the INSERT. Both get zero rows, so both do the INSERT. One fails with a duplicate key error.
This is why you need a re-try loop. You might think that you can prevent duplicate key errors or lost updates with clever SQL, but you can't. You need to check row counts or handle duplicate key errors (depending on the chosen approach) and re-try.
Please don't roll your own solution for this. Like with message queuing, it's probably wrong.
Bulk upsert with lock
Sometimes you want to do a bulk upsert, where you have a new data set that you want to merge into an older existing data set. This is vastly more efficient than individual row upserts and should be preferred whenever practical.
In this case, you typically follow the following process:
CREATE a TEMPORARY table
COPY or bulk-insert the new data into the temp table
LOCK the target table IN EXCLUSIVE MODE. This permits other transactions to SELECT, but not make any changes to the table.
Do an UPDATE ... FROM of existing records using the values in the temp table;
Do an INSERT of rows that don't already exist in the target table;
COMMIT, releasing the lock.
For example, for the example given in the question, using multi-valued INSERT to populate the temp table:
BEGIN;
CREATE TEMPORARY TABLE newvals(id integer, somedata text);
INSERT INTO newvals(id, somedata) VALUES (2, 'Joe'), (3, 'Alan');
LOCK TABLE testtable IN EXCLUSIVE MODE;
UPDATE testtable
SET somedata = newvals.somedata
FROM newvals
WHERE newvals.id = testtable.id;
INSERT INTO testtable
SELECT newvals.id, newvals.somedata
FROM newvals
LEFT OUTER JOIN testtable ON (testtable.id = newvals.id)
WHERE testtable.id IS NULL;
COMMIT;
Related reading
UPSERT wiki page
UPSERTisms in Postgres
Insert, on duplicate update in PostgreSQL?
http://petereisentraut.blogspot.com/2010/05/merge-syntax.html
Upsert with a transaction
Is SELECT or INSERT in a function prone to race conditions?
SQL MERGE on the PostgreSQL wiki
Most idiomatic way to implement UPSERT in Postgresql nowadays
What about MERGE?
SQL-standard MERGE actually has poorly defined concurrency semantics and is not suitable for upserting without locking a table first.
It's a really useful OLAP statement for data merging, but it's not actually a useful solution for concurrency-safe upsert. There's lots of advice to people using other DBMSes to use MERGE for upserts, but it's actually wrong.
Other DBs:
INSERT ... ON DUPLICATE KEY UPDATE in MySQL
MERGE from MS SQL Server (but see above about MERGE problems)
MERGE from Oracle (but see above about MERGE problems)
Here are some examples for insert ... on conflict ... (pg 9.5+) :
Insert, on conflict - do nothing.
insert into dummy(id, name, size) values(1, 'new_name', 3)
on conflict do nothing;
Insert, on conflict - do update, specify conflict target via column.
insert into dummy(id, name, size) values(1, 'new_name', 3)
on conflict(id)
do update set name = 'new_name', size = 3;
Insert, on conflict - do update, specify conflict target via constraint name.
insert into dummy(id, name, size) values(1, 'new_name', 3)
on conflict on constraint dummy_pkey
do update set name = 'new_name', size = 4;
I am trying to contribute another solution for the single-insertion problem with the pre-9.5 versions of PostgreSQL. The idea is simply to try to perform the insertion first and, in case the record is already present, to update it:
do $$
begin
insert into testtable(id, somedata) values(2,'Joe');
exception when unique_violation then
update testtable set somedata = 'Joe' where id = 2;
end $$;
Note that this solution can be applied only if there are no deletions of rows of the table.
I do not know about the efficiency of this solution, but it seems to me reasonable enough.
SQLAlchemy upsert for Postgres >=9.5
Since the large post above covers many different SQL approaches for Postgres versions (not only non-9.5 as in the question), I would like to add how to do it in SQLAlchemy if you are using Postgres 9.5. Instead of implementing your own upsert, you can also use SQLAlchemy's functions (which were added in SQLAlchemy 1.1). Personally, I would recommend using these, if possible. Not only because of convenience, but also because it lets PostgreSQL handle any race conditions that might occur.
Cross-posting from another answer I gave yesterday (https://stackoverflow.com/a/44395983/2156909)
SQLAlchemy supports ON CONFLICT now with two methods on_conflict_do_update() and on_conflict_do_nothing():
Copying from the documentation:
from sqlalchemy.dialects.postgresql import insert
stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
stmt = stmt.on_conflict_do_update(
    index_elements=[my_table.c.user_email],
    index_where=my_table.c.user_email.like('%@gmail.com'),
    set_=dict(data=stmt.excluded.data)
)
conn.execute(stmt)
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html?highlight=conflict#insert-on-conflict-upsert
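The DO NOTHING variant is similar; a short sketch against the same my_table:

stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
stmt = stmt.on_conflict_do_nothing(index_elements=[my_table.c.user_email])
conn.execute(stmt)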
MERGE in PostgreSQL v. 15
Since PostgreSQL v15, it is possible to use the MERGE command. It has actually been presented as one of the main improvements of this new version.
It uses a WHEN MATCHED / WHEN NOT MATCHED conditional to choose the behaviour when there is an existing row matching the criteria.
It is even better than the standard UPSERT, as the new feature gives full control to INSERT, UPDATE or DELETE rows in bulk.
MERGE INTO customer_account ca
USING recent_transactions t
ON t.customer_id = ca.customer_id
WHEN MATCHED THEN
UPDATE SET balance = balance + transaction_value
WHEN NOT MATCHED THEN
INSERT (customer_id, balance)
VALUES (t.customer_id, t.transaction_value)
Another pre-9.5 option is a writable CTE that attempts the UPDATE first and only INSERTs if nothing was updated:
WITH UPD AS (UPDATE TEST_TABLE SET SOME_DATA = 'Joe' WHERE ID = 2
RETURNING ID),
INS AS (SELECT '2', 'Joe' WHERE NOT EXISTS (SELECT * FROM UPD))
INSERT INTO TEST_TABLE(ID, SOME_DATA) SELECT * FROM INS
Tested on Postgresql 9.3
Since this question was closed, I'm posting here how to do it using SQLAlchemy. Via recursion, it retries a bulk insert or update to combat race conditions and validation errors.
First the imports
import itertools as it
from functools import partial
from operator import itemgetter
from sqlalchemy.exc import IntegrityError
from app import session
from models import Posts
Now a couple helper functions
def chunk(content, chunksize=None):
    """Groups data into chunks each with (at most) `chunksize` items.
    https://stackoverflow.com/a/22919323/408556
    """
    if chunksize:
        i = iter(content)
        generator = (list(it.islice(i, chunksize)) for _ in it.count())
    else:
        generator = iter([content])

    return it.takewhile(bool, generator)
def gen_resources(records):
    """Yields a dictionary if the record's id already exists, a row object
    otherwise.
    """
    ids = {item[0] for item in session.query(Posts.id)}

    for record in records:
        is_row = hasattr(record, 'to_dict')

        if is_row and record.id in ids:
            # It's a row but the id already exists, so we need to convert it
            # to a dict that updates the existing record. Since it is duplicate,
            # also yield True
            yield record.to_dict(), True
        elif is_row:
            # It's a row and the id doesn't exist, so no conversion needed.
            # Since it's not a duplicate, also yield False
            yield record, False
        elif record['id'] in ids:
            # It's a dict and the id already exists, so no conversion needed.
            # Since it is duplicate, also yield True
            yield record, True
        else:
            # It's a dict and the id doesn't exist, so we need to convert it.
            # Since it's not a duplicate, also yield False
            yield Posts(**record), False
And finally the upsert function
def upsert(data, chunksize=None):
    for records in chunk(data, chunksize):
        resources = gen_resources(records)
        sorted_resources = sorted(resources, key=itemgetter(1))

        for dupe, group in it.groupby(sorted_resources, itemgetter(1)):
            items = [g[0] for g in group]

            if dupe:
                _upsert = partial(session.bulk_update_mappings, Posts)
            else:
                _upsert = session.add_all

            try:
                _upsert(items)
                session.commit()
            except IntegrityError:
                # A record was added or deleted after we checked, so retry
                #
                # modify accordingly by adding additional exceptions, e.g.,
                # except (IntegrityError, ValidationError, ValueError)
                session.rollback()
                upsert(items)
            except Exception as e:
                # Some other error occurred so reduce chunksize to isolate the
                # offending row(s)
                session.rollback()
                num_items = len(items)

                if num_items > 1:
                    upsert(items, num_items // 2)
                else:
                    print('Error adding record {}'.format(items[0]))
Here's how you use it
>>> data = [
... {'id': 1, 'text': 'updated post1'},
... {'id': 5, 'text': 'updated post5'},
... {'id': 1000, 'text': 'new post1000'}]
...
>>> upsert(data)
The advantage this has over bulk_save_objects is that it can handle relationships, error checking, etc on insert (unlike bulk operations).

removing error query during commit

I have a PostgreSQL db which I am updating with around 100000 records. I use session.merge() to insert/update each record, and I commit after every 1000 records.
i = 0
for record in records:
    i += 1
    session.merge(record)
    if i % 1000 == 0:
        session.commit()
This code works fine. In my database I have a table with a UNIQUE field, and there are some duplicated records that I insert into it. An error is thrown when this happens, saying the field is not unique. Since I am inserting 1000 records at a time, a rollback will not help me skip these records. Is there any way I can skip the session.merge() for the duplicate records (other than parsing through all the records to find the duplicates, of course)?
I think you already know this, but let's start out with a piece of dogma: you specified that the field needs to be unique, so you have to let the database check for uniqueness or deal with the errors from not letting that happen.
Checking for uniqueness:
if value not in database:
    session.add(value)
    session.commit()
Not checking for uniqueness and catching the exception.
try:
    session.add(value)
    session.commit()
except IntegrityError:
    session.rollback()
The first one has a race condition. I tend to use the second pattern.
Now, bringing this back to your actual issue, if you want to assure uniqueness on a column in the database then obviously you're going to have to either let the db assure itself of the loaded value's actual uniqueness, or let the database give you an error and you handle it.
That's obviously a lot slower than adding 100k objects to the session and just committing them all, but that's how databases work.
You might want to consider massaging the data which you are loading OUTSIDE the database and BEFORE attempting to load it, to ensure uniqueness. That way, when you load it you can drop the need to check for uniqueness. Pretty easy to do with command line tools if for example you're loading from csv or text files.
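For example, a rough sketch of pre-deduplicating a CSV before the load (the file names and the unique column name are assumptions):

import csv

seen = set()
with open('records.csv') as src, open('records_deduped.csv', 'w') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        key = row['unique_field']   # the column with the UNIQUE constraint
        if key not in seen:         # keep only the first occurrence
            seen.add(key)
            writer.writerow(row)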
You can get a "partial rollback" using a SAVEPOINT, which SQLAlchemy exposes via begin_nested(). You could do it just like this:
for i, record in enumerate(records):
    try:
        with session.begin_nested():
            session.merge(record)
    except:
        print "Skipped record %s" % record

    if not i % 1000:
        session.commit()
notes for the above:
in python, we never do the "i = i+1" thing. use enumerate().
with session.begin_nested(): is the same as saying begin_nested(), then commit() if no exception, or rollback() if so.
You might want to consider writing a function along the lines of this example from the PostgreSQL documentation.
This is the option which works best for me because the number of records with duplicate unique keys is minimal.
def update_exception(records, i, failed_records):
    failed_records.append(records[i]['pk'])
    session.rollback()
    start_range = int(round(i / 1000, 0) * 1000)

    for index in range(start_range, i + 1):
        if records[index]['pk'] not in failed_records:
            ins_obj = Model()  # build the model instance for records[index] here
            try:
                session.merge(ins_obj)
            except:
                failed_records.append(records[index]['pk'])
                pass
Say I hit an error at record 2375: I store the primary key 'pk' of record 2375 in failed_records and then recommit from 2000 to 2375. It seems much faster than committing one by one.
