How to set query execution timeout in sqlalchemy sessions - python

I am using MSSQL as my database. Some queries take too long to respond, and sometimes they result in deadlocks, so I want to set a custom timeout for queries. I found a custom solution here, but it is for psycopg2 and SQLAlchemy; I need it for MSSQL and SQLAlchemy. Is there any way to accomplish this?
Thanks in advance.
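For reference, a minimal sketch of one approach, assuming the mssql+pyodbc dialect (the connection string is a placeholder): pyodbc connections expose a per-connection query timeout in seconds, which can be set from an engine connect event.

from sqlalchemy import create_engine, event

# Placeholder connection string; assumes the mssql+pyodbc dialect.
engine = create_engine("mssql+pyodbc://user:password@my_dsn")

@event.listens_for(engine, "connect")
def set_query_timeout(dbapi_connection, connection_record):
    # Queries running longer than this many seconds are cancelled by the
    # driver and surface as an OperationalError.
    dbapi_connection.timeout = 30

Sessions bound to this engine then pick up the timeout for every query they execute.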

Related

Retrieve Status of Rows Processed for long query using PYODBC / SQLAlchemy in Python

I have several SQL queries that I run regularly but that can take a while to complete. I've noticed that different applications which query SQL Server may show a status update (e.g. 1200 rows received).
I was wondering whether there is a way to retrieve this within SQLAlchemy or pyodbc, perhaps by treating the query as a threaded worker and polling it as it completes.
Thanks,
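For what it's worth, a rough sketch of the threaded-worker idea using plain pyodbc (connection string, query, and batch size are all placeholders): fetch in chunks inside a background thread and let the main thread poll a row counter.

import threading
import pyodbc

class QueryWorker(threading.Thread):
    """Runs a query in the background and counts rows as they arrive."""

    def __init__(self, conn_str, sql):
        super().__init__(daemon=True)
        self.conn_str = conn_str
        self.sql = sql
        self.rows_received = 0
        self.rows = []

    def run(self):
        conn = pyodbc.connect(self.conn_str)
        cursor = conn.cursor()
        cursor.execute(self.sql)
        while True:
            batch = cursor.fetchmany(1000)  # fetch in chunks so progress is visible
            if not batch:
                break
            self.rows.extend(batch)
            self.rows_received += len(batch)
        conn.close()

worker = QueryWorker("DSN=my_dsn;UID=user;PWD=secret", "SELECT * FROM big_table")
worker.start()
while worker.is_alive():
    worker.join(timeout=5)  # poll every few seconds
    print("{} rows received".format(worker.rows_received))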

Orator ORM: How to change database?

In the Orator documentation, I can only find a way to change the connection. I've already checked the Orator Connection model, but a database setter is nowhere to be found.
Is there a way to change the database on Orator ORM without creating multiple connections? Thank you.
Github Issue Link: https://github.com/sdispater/orator/issues/326
This was answered in the github issue by josephmancuso:
No. You'll need to switch between the connections.
A connection is just a group of database settings.
You can technically do so if you query manually, but this bypasses the whole benefit of using an ORM:
db.select('select * from db2.table')  # runs a raw query
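As an illustration of switching between connections, a sketch with made-up connection settings, one connection per database:

from orator import DatabaseManager, Model

config = {
    "db1": {
        "driver": "mysql",
        "host": "localhost",
        "database": "db1",
        "user": "user",
        "password": "secret",
    },
    "db2": {
        "driver": "mysql",
        "host": "localhost",
        "database": "db2",
        "user": "user",
        "password": "secret",
    },
}

db = DatabaseManager(config)
Model.set_connection_resolver(db)

# Pick the connection (and therefore the database) explicitly per query...
rows = db.connection("db2").table("table").get()

# ...or pin a model to one of the connections.
class Article(Model):
    __connection__ = "db2"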

lock rows so others can't modify them using Postgres and SQLAlchemy ORM

I'd like to make a query to a Postgres database and then somehow lock the returned rows so that other SQLAlchemy threads/processes cannot modify them. In the same session/transaction as the query, I'd like to update the rows I received and then commit the changes. Does anyone know how to do this?
I tried implementing the query with the with_for_update(nowait=True) function, but this throws an OperationalError. I could catch this exception and simply retry the query, but I'd like to offload this to the DB if possible.
I'm using:
Postgres 9.4.1
SQLAlchemy 1.0.11 (ORM)
Flask-RESTful
Flask-SQLAlchemy
I'm prepared to use plain SQLAlchemy if it's not possible with Flask-SQLAlchemy.
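In case it's useful, a minimal sketch of the blocking variant (the model, table, and connection URL are placeholders): omitting nowait=True emits a plain SELECT ... FOR UPDATE, so Postgres itself waits for the competing transaction to release the lock instead of raising, which pushes the retry down to the database.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Task(Base):  # placeholder model
    __tablename__ = "tasks"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("postgresql://user:secret@localhost/mydb")  # placeholder
Session = sessionmaker(bind=engine)

session = Session()
try:
    # Rows returned here stay locked until the transaction ends; other
    # transactions that try to lock or update them will wait.
    rows = (
        session.query(Task)
        .filter(Task.status == "pending")
        .with_for_update()
        .all()
    )
    for row in rows:
        row.status = "processed"
    session.commit()  # committing releases the row locks
except Exception:
    session.rollback()
    raise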

Django shell(plus) timeout on remote database connection

I'm migrating a legacy db into a bunch of models I have running locally. I connected to the legacy db and ran inspectdb to recreate the models. Now I'm writing functions to pair the numerous fields to their equivalents in new models. I've been using shell_plus, and the first minute or so of queries go great, but then my connections keep timing out with the following:
RemoteArticle.objects.using("remote_mysql").all()
django.db.utils.OperationalError: (2013, 'Lost connection to MySQL server during query')
Is there a command I can run to either a) reconnect to the db before running a query (so I don't have to reopen shell_plus), or ideally b) make it so that all of my queries automatically reconnect each time I run them?
I've seen timeout issues on other platforms, but I wasn't sure if Django had a built-in way of handling such things.
Thanks!
There is a page in the MySQL docs on this. Since you're apparently trying to migrate a big database, this part may apply to you:
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
The timeout makes sense, because the all() is just one query to retrieve all rows. So, reconnecting before each query is not the solution. If changing the net_read_timeout is not an option, you might want to think about paging.
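For example, a rough sketch of paging the remote queryset instead of pulling everything back in one query (the batch size is arbitrary, and RemoteArticle is the model from the question):

BATCH = 1000
qs = RemoteArticle.objects.using("remote_mysql").order_by("pk")

start = 0
while True:
    batch = list(qs[start:start + BATCH])  # one LIMIT/OFFSET query per page
    if not batch:
        break
    for article in batch:
        ...  # map the legacy fields onto the new models here
    start += BATCH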
I believe Lost connection to MySQL server during query happens because you exhaust a MySQL resource such as a timeout, sessions, or memory.
If the problem is the timeout, try increasing it on the DB server, e.g. --net_read_timeout=100.
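If changing the server configuration isn't an option, a sketch of raising it per connection from the Django side (settings values are placeholders; this assumes the MySQLdb/mysqlclient backend, which accepts init_command):

# settings.py (fragment)
DATABASES = {
    # ...
    "remote_mysql": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "legacy_db",           # placeholder
        "HOST": "legacy.example.com",  # placeholder
        "USER": "user",
        "PASSWORD": "secret",
        "OPTIONS": {
            # Raise the read timeout for this connection only.
            "init_command": "SET SESSION net_read_timeout=100",
        },
    },
}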

Python-Flask-Postgresql DB sql Locks

I tried looking in the Flask and SQLAlchemy documentation but I couldn't find the answer...
Do the sessions used with SQLAlchemy automatically lock the DB? I do a read (let's say it takes a long time), and it seems to block any other reads I'm trying to do on my Postgres DB. Is this something I have to configure manually on Postgres using constraints, or is there something I can change with Flask / SQLAlchemy?
Thanks
