I am using SQLAlchemy and pg8000 to connect to a Postgres database.
I have checked the pg_stat_activity view, which shows many connections sitting in the 'idle in transaction' state after running SELECT queries. But the application does far more reads than writes; inserts are few and far between.
I suspect that a new transaction is created for each query, even for simple select statements.
Is it possible to run a read-only query without a transaction, so that it does not need to be committed or rolled back?
Currently, the app runs its queries with the sqlalchemy.engine.Engine.execute method for CRUD operations and raw cursors for calling stored procedures. How should I change these calls to indicate that some of them should not start a transaction?
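Postgres always wraps a statement in a transaction, but SQLAlchemy can be told to commit each statement immediately via the AUTOCOMMIT isolation level, which the pg8000 dialect supports. A minimal sketch, assuming SQLAlchemy 1.x and a hypothetical my_table:

    from sqlalchemy import create_engine

    engine = create_engine("postgresql+pg8000://user:pass@localhost/mydb")

    # A connection whose statements commit immediately, so it never
    # lingers in 'idle in transaction' between queries:
    conn = engine.connect().execution_options(isolation_level="AUTOCOMMIT")
    rows = conn.execute("SELECT * FROM my_table").fetchall()  # hypothetical table
    conn.close()

    # Alternatively, apply it engine-wide at creation time:
    # engine = create_engine(url, isolation_level="AUTOCOMMIT")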
Related
Is it possible to run a Python Flask app that pulls from a SQL database, while also running a Python script that updates the same database every few seconds?
Yes, databases are designed to handle this type of concurrent access. If the database is in the middle of an update, it will wait until the update is complete before handling the Flask app's query, and it will complete the query before starting the next incoming update.
I'd like to make a query to a Postgres database and then somehow lock the returned rows so that other SQLAlchemy threads/processes cannot modify them. In the same session/transaction as the query, I'd like to update the rows I received and then commit the changes. Does anyone know what to do?
I tried implementing a query with the with_for_update(nowait=True) method, but this throws an OperationalError. I could catch this exception and simply retry the query, but I'd like to offload this to the database if possible (see the sketch after the version list below).
I'm using:
Postgres 9.4.1
SQLAlchemy 1.0.11 (ORM)
Flask-RESTful
Flask-SQLAlchemy
I'm prepared to use straight SQLAlchemy if it's not possible with Flask-SQLAlchemy.
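A minimal sketch of the blocking variant, assuming a hypothetical Account model and a Flask-SQLAlchemy db.session; omitting nowait=True makes Postgres issue SELECT ... FOR UPDATE and simply wait for the competing lock to clear, so the waiting is offloaded to the database instead of retry logic in Python:

    rows = (
        db.session.query(Account)
        .filter(Account.pending.is_(True))
        .with_for_update()       # blocks instead of raising OperationalError
        .all()
    )
    for row in rows:
        row.pending = False      # rows stay locked by this transaction
    db.session.commit()          # the commit releases the row locks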
I'm writing a Python application that interacts with a MySQL database. The basic way to do this is to connect to the database and execute the queries using the connection's cursor object. Should I hardcode each query in the Python code, or should I put each query in a stored procedure? Which is the better design choice, keeping security and elegance in mind? Note that many of the queries are one-liners (SELECT queries).
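Whichever you choose, the mechanics look similar from Python. A minimal sketch, assuming the MySQL Connector/Python driver, a hypothetical users table, and a matching get_user stored procedure; in both cases, values are passed as parameters rather than formatted into the SQL string:

    import mysql.connector

    conn = mysql.connector.connect(user="app", password="secret", database="appdb")
    cur = conn.cursor()

    # Query hardcoded in the Python code:
    cur.execute("SELECT name, email FROM users WHERE id = %s", (42,))
    print(cur.fetchone())

    # The same lookup through a stored procedure:
    cur.callproc("get_user", (42,))
    for result in cur.stored_results():
        print(result.fetchone())

    conn.close()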
I'm currently using SQLAlchemy with two distinct session objects. In one session, I am inserting rows into a MySQL database. In the other session, I am querying that database for the max row id. However, the second session is not seeing the latest data from the database. If I query the database manually, I see the correct, higher max row id.
How can I force the second session to query the live database?
The first session needs to commit to flush changes to the database.
first_session.commit()
The Session holds all the objects in memory and flushes them to the database together (it defers the writes for efficiency). Thus the changes made by first_session are not visible to second_session, which is reading data from the database.
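A minimal sketch of the fix, assuming a hypothetical Item model and two sessions bound to the same engine:

    from sqlalchemy import func

    first_session.add(Item(name="new row"))
    first_session.commit()  # flush + commit: the row is now in the database

    # Only after the commit above can the other session's query see the row.
    max_id = second_session.query(func.max(Item.id)).scalar()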
Had a similar problem; for some reason I had to commit both sessions, even the one that is only reading.
This might be a problem with my code, though. I cannot use the same session, as the code will run on different machines. Also, the SQLAlchemy documentation says that each session should be used by one thread only, although one reading and one writing should not be a problem.
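That behavior is expected if the database's default isolation level is REPEATABLE READ (as with MySQL/InnoDB): the reading session sees a snapshot taken when its transaction began, so it has to end that transaction to see newer rows. A sketch, reusing the hypothetical Item model from above:

    # Ending the reader's open transaction discards its snapshot; the
    # next query starts a fresh transaction and sees newly committed rows.
    second_session.rollback()  # commit() also works; both end the snapshot
    max_id = second_session.query(func.max(Item.id)).scalar()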
I am currently working on a new web application that needs to execute an SQL statement before giving a session to the application itself.
In detail: I am running a PostgreSQL database server with multiple schemas and I need to execute a SET search_path statement before the application uses the session. I am also using the ZopeTransactionExtension to have transactions automatically handled at the request level.
To ensure the execution of the SQL statement, there seem to be two possible ways:
Executing the statement at the Engine/Connection level via SQLAlchemy events (from Multi-tenancy with SQLAlchemy)
Executing the statement at the session level (from SQLAlchemy support of Postgres Schemas)
Since I am using a scoped session and want to keep my transactions intact, I wonder which of these ways will possibly disturb transaction management.
For example, does the Engine hand out a new connection from the Pool on every query? Or is it attached to the session for its lifetime, i.e. until the request has been processed and the session & transaction are closed/committed?
On the other hand, since I am using a scoped session, can I do it the way zzzeek suggested in the second link? That is, is the context preserved and automatically reset once the transaction is over?
Is there possibly a third way that I am missing?
For example, does the Engine hand out a new connection from the Pool on every query?
only if you have autocommit=True, which should not be the case.
Or is it attached to the session for its lifetime, i.e. until the request has been processed and the session & transaction are closed/committed?
It's attached per transaction. But the search_path in PostgreSQL is per PostgreSQL session (not to be confused with a SQLAlchemy session); it basically lasts for the lifespan of the connection itself.
The session (and the engine, and the pool) these days have a ton of event hooks you can grab onto in order to set up state like this. If you want to stick with the Session, you can try after_begin.
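A minimal sketch of that after_begin approach, assuming a scoped session named Session and a hypothetical tenant schema name:

    from sqlalchemy import event

    @event.listens_for(Session, "after_begin")
    def set_search_path(session, transaction, connection):
        # Fires whenever the session starts a new transaction, so the
        # search_path is re-applied even after a commit/rollback returns
        # the connection to the pool and a different one is checked out.
        connection.execute("SET search_path TO tenant_schema, public")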