I have a production_table and stage_table.
I have a Python script that runs for a few hours and generates data in the stage_table.
At the end of the script I want to COPY the data from the stage_table to the production_table.
Basically this is what I want:
1. TRUNCATE production_table
2. COPY production_table from stage_table
This is my code:
from sqlalchemy import create_engine
from sqlalchemy.sql import text as sa_text
engine = create_engine("mysql+pymysql:// AMAZON AWS")
engine.execute(sa_text('''TRUNCATE TABLE {1}; COPY TABLE {1} from {0}'''.format(stage_table, production_table)).execution_options(autocommit=True))
This should generate:
TRUNCATE TABLE production_table; COPY TABLE production_table from stage_table
However this doesn't work.
sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1064,
u"You have an error in your SQL syntax;
How can I make it work? And how can I make sure that the TRUNCATE and COPY happen together? I don't want the TRUNCATE to happen if the COPY aborts.
The usual way to handle multiple statements in a single transaction in SQLAlchemy would be to begin an explicit transaction and execute each statement in it:
with engine.begin() as conn:
conn.execute(statement_1)
conn.execute(statement_2)
...
As to your original attempt, there is no COPY statement in MySQL. Some other DBMSs do have something of the kind. Also, not all DB-API drivers support multiple statements in a single query or command, at least out of the box, which seems to be the case here as well. See this issue and the related note in the PyMySQL ChangeLog.
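Putting those two points together, a minimal sketch of the closest MySQL equivalent might look like this, with INSERT INTO ... SELECT standing in for the nonexistent COPY and each statement executed separately (but see the caveat about TRUNCATE just below):
from sqlalchemy import create_engine
from sqlalchemy.sql import text as sa_text

engine = create_engine("mysql+pymysql:// AMAZON AWS")  # URI as in the question

with engine.begin() as conn:
    # Two separate execute() calls, since the driver rejects
    # multi-statement strings; INSERT ... SELECT replaces COPY.
    conn.execute(sa_text("TRUNCATE TABLE production_table"))
    conn.execute(sa_text("INSERT INTO production_table SELECT * FROM stage_table"))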
The biggest issue is that not all statements in MySQL can be rolled back, the most common being DDL statements. In other words, you simply cannot execute TRUNCATE [TABLE] ... in the same transaction as the following INSERT INTO ... and must design your application around that limitation. As suggested in the comments by Christian W., you could perhaps create an entirely new table from your staging table and rename, or just swap the production and staging tables. RENAME TABLE ... cannot be rolled back either, but at least you'd reduce the window for error, and you could undo the change, since the original production table would still be there, just under a new name. You could then remove the original production table when all else is done. Here's something that demonstrates the idea, but it requires manual intervention if something goes awry:
# No point in faking transactions here, since MySQL is in use.
engine.execute("CREATE TABLE new_production AS SELECT * FROM stage_table")
engine.execute("RENAME TABLE production_table TO old_production")
engine.execute("RENAME TABLE new_production TO production_table")
# Point of no return:
engine.execute("DROP TABLE old_production")
Related
I'm using psycopg3 and PostgreSQL 14. When I run a copy or execute function and include an ON clause, it gives me the error psycopg.errors.SyntaxError: syntax error at or near "ON"
I have tried two different functions, both resulting in the same error.
cur.execute("""
CREATE TABLE temp_mls AS TABLE mls_properties WITH NO DATA
ON COMMIT DROP
""")
and
cur.copy("""COPY mls_properties (fips_code) FROM STDIN
ON CONFLICT DO NOTHING
""")
No "CONFLICT" word in https://www.postgresql.org/docs/current/sql-copy.html.
Obviously, ON COMMIT DROP should warp in an transaction. However it only apply to temp table. Quote from manual:
ON COMMIT
The behavior of temporary tables at the end of a transaction block can be controlled using ON COMMIT. The three options are:
PRESERVE ROWS
No special action is taken at the ends of transactions. This is the default behavior.
DELETE ROWS
All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic TRUNCATE is done at each commit.
DROP
The temporary table will be dropped at the end of the current transaction block.
So the following will work:
BEGIN;
CREATE TEMP TABLE temp_mls ON COMMIT DROP AS TABLE mls_properties WITH NO DATA;
COMMIT;
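In psycopg3 terms, a minimal sketch might look like the following; the connection string is a placeholder, and conn.transaction() supplies the BEGIN/COMMIT block that ON COMMIT DROP requires:
import psycopg

with psycopg.connect("dbname=mydb user=postgres") as conn:  # placeholder DSN
    with conn.transaction():
        with conn.cursor() as cur:
            # The temp table exists only until this transaction block ends.
            cur.execute("""
                CREATE TEMP TABLE temp_mls ON COMMIT DROP
                AS TABLE mls_properties WITH NO DATA
            """)
            with cur.copy("COPY temp_mls (fips_code) FROM STDIN") as copy:
                copy.write_row(("12345",))  # hypothetical sample value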
Step 1: Create a temporary table in SQL Server with pyodbc for the objects
Step 2: Select the objects from the temporary table and load them into a pandas DataFrame
Step 3: Print the DataFrame
To create the temporary table I work with a pyodbc cursor, since the pandas.read_sql command throws errors for that statement. However, it also throws an error when I try to convert the cursor results into a pandas DataFrame, even with the special line for turning tuples into DataFrames.
Here is my program to connect, create, read and print, which works as long as the query stays as simple as it is now (my actual approach has a few hundred lines of SQL query statements):
import codecs
import os
import io
import pandas as pd
import pyodbc as po
server = 'sql_server'
database = 'sql_database'
connection = po.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database+';Trusted_Connection=yes;')
cursor = connection.cursor()
query1 = """
CREATE TABLE #ttobject (object_nr varchar(6), change_date datetime)
INSERT INTO #ttobject (object_nr)
VALUES
('112211'),
('113311'),
('114411');
"""
query2 = """
SELECT *
FROM #ttobject
Drop table if exists #ttobject
"""
cursor.execute(query1)
df = pd.read_sql_query(query2, connection)
print(df)
Because of the length of the actual query I'll spare you the details and instead post the error message here:
('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
This error gets thrown at query2, which is a multi-part select statement with some joins and pivot functions.
When I try to put everything into one cursor I get issues converting it from a cursor into a DataFrame (I've tried several methods; maybe someone knows one that isn't on SO already, or one with a special title that I couldn't find).
The same problem occurs if I try to use only pd.read_sql: then the creation of the temporary table doesn't work.
I don't know where to go on from here.
Please let me know if I can assist with further details that I may have overlooked, owing to my lostness :S
23.5.19 Further investigation:
Following Gord's suggestion I tried setting autocommit to True, which works for simple SQL statements but not for my really long and time-consuming one.
Secondly, I tried adding cursor.execute('SET NOCOUNT ON; EXEC schema.proc #muted = 1')
At the moment my guess is that the first query takes longer, so Python already starts on the second one and therefore the connection is blocked. Or the first query is returning some feedback, so Python thinks it is finished before it actually is.
I added a time.sleep(100) after execution of the first query but am still getting the hstmt is busy error. I wonder why, because it should have had enough time to process the first query.
Fun fact: the query runs smoothly as long as I'm not trying to output any result from it.
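If the row-count messages from the INSERTs are indeed what keeps the connection busy, one commonly suggested sketch (untested against your exact setup) is to prepend SET NOCOUNT ON to the first batch, reusing the connection and cursor from above:
query1 = """
SET NOCOUNT ON;
CREATE TABLE #ttobject (object_nr varchar(6), change_date datetime);
INSERT INTO #ttobject (object_nr)
VALUES ('112211'), ('113311'), ('114411');
"""
cursor.execute(query1)

# With no pending row-count results, the same connection is free for pandas.
df = pd.read_sql_query("SELECT * FROM #ttobject", connection)
print(df)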
I have an issue where SQLAlchemy Core does not insert rows when I try to insert data using connection.execute(table.insert(), list_of_rows). I construct the connection object without any additional parameters, i.e. connection = engine.connect(), and the engine with only one additional parameter: engine = create_engine(uri, echo=True).
Besides not finding the data in the DB, I also can't find any INSERT statement in my app's logs.
It may be important that I'm reproducing this issue during py.test tests.
The DB I use is MSSQL in a Docker container.
EDIT1:
The rowcount of the proxy result is always -1, regardless of whether I use a transaction, and also when I change the insert to connection.execute(table.insert().execution_options(autocommit=True), list_of_rows).rowcount.
EDIT2:
I rewrote this code and now it works. I don't see any major difference.
What's the inserted row count after connection.execute:
proxy = connection.execute(table.insert(), list_of_rows)
print(proxy.rowcount)
If rowcount is a positive integer, that proves the data was indeed written to the DB, but it may be visible only within a transaction; if so, check whether autocommit is on: https://docs.sqlalchemy.org/en/latest/core/connections.html#understanding-autocommit
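As a minimal sketch, assuming uri, table and list_of_rows as defined in the question, an explicit transaction removes any dependence on the driver's autocommit behavior:
from sqlalchemy import create_engine

engine = create_engine(uri, echo=True)

# engine.begin() opens a transaction and commits on successful exit,
# so the INSERT is persisted regardless of autocommit settings.
with engine.begin() as connection:
    proxy = connection.execute(table.insert(), list_of_rows)
    print(proxy.rowcount)  # may still be -1 for executemany on some drivers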
In short:
I have a PostgreSQL database and I connect to it through Python's psycopg2 module. Such a script might look like this:
import psycopg2
# connect to my database
conn = psycopg2.connect(dbname="<my-dbname>",
user="postgres",
password="<password>",
host="localhost",
port="5432")
cur = conn.cursor()
ins = "insert into testtable (age, name) values (%s,%s);"
data = ("90", "George")
sel = "select * from testtable;"
cur.execute(sel)
print(cur.fetchall())
# prints out
# [(100, 'Paul')]
#
# db looks like this
# age | name
# ----+-----
# 100 | Paul
# insert new data - no commit!
cur.execute(ins, data)
# perform the same select again
cur.execute(sel)
print(cur.fetchall())
# prints out
# [(100, 'Paul'),(90, 'George')]
#
# db still looks the same
# age | name
# ----+-----
# 100 | Paul
cur.close()
conn.close()
That is, I connect to that database which at the start of the script looks like this:
age | name
----+-----
100 | Paul
I perform an SQL select and retrieve only Paul's data. Then I do an SQL insert, without any commit, yet the second SQL select still fetches both Paul and George, and I don't want that. I've looked into both the psycopg and the PostgreSQL docs and found out about ISOLATION LEVEL (see Postgresql and see psycopg2). The PostgreSQL docs (under 13.2.1. Read Committed Isolation Level) explicitly say:
However, SELECT does see the effects of previous updates executed within its own transaction, even though they are not yet committed.
I've tried different isolation levels. I understand that Read Committed and Repeatable Read don't work; I thought that Serializable might, but it does not either, meaning that I can still fetch uncommitted data with a select.
I could do conn.set_isolation_level(0), where 0 represents psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT, or I could probably wrap the execute commands inside with statements (see).
After all this, I am a bit confused whether I understand transactions and isolation (and whether the behavior of select without commit is completely normal) or not. Can somebody enlighten me on this topic?
Your two SELECT statements are using the same connection, and therefore the same transaction. From the psycopg manual you linked:
By default, the first time a command is sent to the database ... a new transaction is created. The following database commands will be executed in the context of the same transaction.
Your code is therefore equivalent to the following:
BEGIN TRANSACTION;
select * from testtable;
insert into testtable (age, name) values (90, 'George');
select * from testtable;
ROLLBACK TRANSACTION;
Isolation levels control how a transaction interacts with other transactions. Within a transaction, you can always see the effects of commands within that transaction.
If you want to isolate two different parts of your code, you will need to open two connections to the database, each of which will (unless you enable autocommit) create a separate transaction.
Note that according to the document already linked, creating a new cursor will not be enough:
...not only the commands issued by the first cursor, but the ones issued by all the cursors created by the same connection
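A minimal sketch of the two-connection setup, with credentials as in the question; the second connection runs in its own transaction and does not see the first connection's uncommitted INSERT:
import psycopg2

conn1 = psycopg2.connect(dbname="<my-dbname>", user="postgres",
                         password="<password>", host="localhost", port="5432")
conn2 = psycopg2.connect(dbname="<my-dbname>", user="postgres",
                         password="<password>", host="localhost", port="5432")
cur1, cur2 = conn1.cursor(), conn2.cursor()

# Insert on the first connection, but do not commit yet.
cur1.execute("insert into testtable (age, name) values (%s,%s);", ("90", "George"))

# The second connection still sees only the committed row.
cur2.execute("select * from testtable;")
print(cur2.fetchall())  # [(100, 'Paul')]

conn1.commit()  # only now does George become visible to other transactions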
Using autocommit will not solve your problem. When autocommit is on, every insert and update is automatically committed to the database, and all subsequent reads will see that data.
It's most unusual not to want to see data that you yourself have written to the database. But if that's what you want, you need two separate connections, and you must make sure that your select executes prior to the commit.
I'm developing a Pylons app which is based on an existing database, so I'm using reflection. I have an SQL file with the schema that I used to create my test database. That's why I can't simply use drop_all and create_all.
I would like to write some unit tests and I faced the problem of clearing the database content after each test. I just want to erase all the data but leave the tables intact. Is this possible?
The application uses Postgres and this is what has to be used also for the tests.
I asked about the same thing on the SQLAlchemy Google group, and I got a recipe that appears to work well (all my tables are emptied). See the thread for reference.
My code (excerpt) looks like this:
import contextlib
from sqlalchemy import MetaData
meta = MetaData()
meta.reflect(bind=engine)  # populate the metadata from the existing schema
with contextlib.closing(engine.connect()) as con:
trans = con.begin()
for table in reversed(meta.sorted_tables):
con.execute(table.delete())
trans.commit()
Edit: I modified the code to delete tables in reverse order; supposedly this should ensure that children are deleted before parents.
For PostgreSQL using TRUNCATE:
with contextlib.closing(engine.connect()) as con:
trans = con.begin()
con.execute('TRUNCATE {} RESTART IDENTITY;'.format(
','.join(table.name
for table in reversed(Base.metadata.sorted_tables))))
trans.commit()
Note: RESTART IDENTITY ensures that all sequences are reset as well. However, this is 50% slower than the DELETE recipe by @aknuds1.
Another recipe is to drop all tables first and then recreate them. This is slower by another 50%:
Base.metadata.drop_all(bind=engine)
Base.metadata.create_all(bind=engine)
How about using truncate:
TRUNCATE [ TABLE ] name [, ...]
(http://www.postgresql.org/docs/8.4/static/sql-truncate.html)
This will delete all the records in the table, but leave the schema intact.