While debugging a unit test in Django, I placed the following statement in my code
Student.objects.filter(id__in=[1,2]).delete()
...
import pdb; pdb.set_trace()
At the breakpoint, if I type Student.objects.count(), I can see that the count has decreased because of the delete. However, if I open psql (the PostgreSQL command line) and check the test database, I can still see the rows (which, according to Django, have been deleted). Why do I see this inconsistency between the Django ORM and the database? Does the ORM cache my queries? How can I make it commit to the database at the breakpoint?
Update:
Quick solution for debugging: add the following lines at the breakpoint to see the data in psql. Thanks to @DanielRoseman for the tip.
from django.db import connection
connection.cursor().execute('commit;')
Django requests (and tests) usually run inside a transaction. Your psql session won't see the changes until that transaction is committed when the request finishes.
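As an aside, if you want a test's writes to be visible from psql without committing by hand, you can base the test on TransactionTestCase, which skips the per-test transaction wrapper at the cost of a slower database reset. A minimal sketch, assuming the Student model from the question (adjust the import path to your app):

from django.test import TransactionTestCase
from myapp.models import Student  # hypothetical app path

class StudentDeleteTest(TransactionTestCase):
    def test_delete(self):
        Student.objects.filter(id__in=[1, 2]).delete()
        # The deletes are committed for real at this point, so a
        # parallel psql session can observe the missing rows.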
Related
I have a caching problem when I use SQLAlchemy.
I use SQLAlchemy to insert data into a MySQL database. Then another application processes this data and updates it directly.
But SQLAlchemy always returns the old data rather than the updated data. I think SQLAlchemy cached my request ... so ... how should I disable it?
The usual cause for people thinking there's a "cache" at play, besides the usual SQLAlchemy identity map which is local to a transaction, is that they are observing the effects of transaction isolation. SQLAlchemy's session works by default in a transactional mode, meaning it waits until session.commit() is called in order to persist data to the database. During this time, other transactions in progress elsewhere will not see this data.
However, due to the isolated nature of transactions, there's an extra twist. Those other transactions in progress will not only not see your transaction's data until it is committed, in some cases they also can't see it until they themselves are committed or rolled back (which is the same effect your close() is having here). A transaction with an average degree of isolation will hold onto the state that it has loaded thus far, and keep giving you that same state local to the transaction even though the real data has changed; this is called repeatable reads in transaction-isolation parlance.
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29
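A small sketch of that effect (Model is a placeholder mapped class and Session a sessionmaker; the exact behavior depends on your backend's isolation level):

session = Session()
obj = session.query(Model).get(1)    # state loaded inside this transaction
# ... another process UPDATEs the row and COMMITs ...
print(obj.name)                      # may still show the old value (repeatable read)
session.commit()                     # ends the transaction and expires loaded state
obj = session.query(Model).get(1)    # re-fetches and now sees the committed change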
This issue has been really frustrating for me, but I have finally figured it out.
I have a Flask/SQLAlchemy Application running alongside an older PHP site. The PHP site would write to the database and SQLAlchemy would not be aware of any changes.
I tried the sessionmaker setting autoflush=True, unsuccessfully.
I tried db_session.flush(), db_session.expire_all(), and db_session.commit() before querying, and NONE worked; they still showed stale data.
Finally I came across this section of the SQLAlchemy docs: http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#transaction-isolation-level
Setting the isolation_level worked great. Now my Flask app is "talking" to the PHP app. Here's the code:
engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)
When the SQLAlchemy engine is started with the "READ UNCOMMITTED" isolation level it asks for "dirty reads", i.e. reading uncommitted changes directly from the database. (Note that PostgreSQL actually treats READ UNCOMMITTED as READ COMMITTED, so there are no true dirty reads; it still helps here because each new statement then sees the latest committed data.)
Hope this helps
Here is a possible solution, courtesy of AaronD in the comments:
from flask_sqlalchemy import SQLAlchemy

class UnlockedAlchemy(SQLAlchemy):
    def apply_driver_hacks(self, app, info, options):
        # Default every engine this extension creates to READ COMMITTED,
        # so each new transaction sees other writers' committed changes.
        if "isolation_level" not in options:
            options["isolation_level"] = "READ COMMITTED"
        return super(UnlockedAlchemy, self).apply_driver_hacks(app, info, options)
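Usage would presumably mirror the stock extension, e.g.:

from flask import Flask

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://scott:tiger@localhost/test"  # placeholder URL
db = UnlockedAlchemy(app)  # sessions bound to this extension now run at READ COMMITTED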
In addition to zzzeek's excellent answer:
I had a similar issue. I solved the problem by using short-lived sessions.
with closing(new_session()) as sess:  # closing comes from contextlib
    ...  # do your stuff
I used a fresh session per task, task group, or request (in the case of a web app). That solved the "caching" problem for me.
This material was very useful for me:
When do I construct a Session, when do I commit it, and when do I close it
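For reference, a sketch of the pattern (new_session is an ordinary sessionmaker; the URL is a placeholder):

from contextlib import closing
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql://scott:tiger@localhost/test")
new_session = sessionmaker(bind=engine)

def handle_task(task):
    # A fresh session per task: no transaction state survives between
    # tasks, so each one starts from the current committed data.
    with closing(new_session()) as sess:
        ...  # do your stuff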
This was happening in my Flask application, and my solution was to expire all objects in the session after every request.
from flask.signals import request_finished

def expire_session(sender, response, **extra):
    # Expire every instance so the next access re-fetches from the database
    app.db.session.expire_all()

request_finished.connect(expire_session, flask_app)
Worked like a charm.
I tried session.commit() and session.flush(); neither worked for me.
After going through the SQLAlchemy source code, I found the way to disable its (compiled SQL statement) caching: setting query_cache_size=0 in create_engine worked (SQLAlchemy 1.4+).
create_engine(connection_string, convert_unicode=True, echo=True, query_cache_size=0)
First, there is no result cache in SQLAlchemy.
Depending on your method of fetching data from the DB, you should run some tests after the database has been updated by others and see whether you can get the new data.
(1) use a Connection:

connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print("username:", row['username'])
connection.close()
(2) use the Engine ...
(3) use the MetaData ...
please follow the steps in: http://docs.sqlalchemy.org/en/latest/core/connections.html
Another possible reason is that your MySQL DB was not permanently updated; restart the MySQL service and check.
As far as I know, SQLAlchemy does not store caches, so you should look at the logging output to see what SQL is actually issued.
Keep getting this warning using a MySQL database:
Some non-transactional changed tables couldn't be rolled back
I'm not sure what it means or whether it is even causing a problem, but I was hoping someone would be able to fill me in on what this warning means.
I am taking a CSV file, reading it line-by-line and creating Django objects using get_or_create. After I get the message, when I try to recreate it, I get further into the CSV file before the warning occurs.
I tried reading about this error online but I really don't understand what it means. It would be ideal to figure out what's causing it, but if I can't, I am wondering whether I can suppress the warning, since maybe it isn't affecting my database negatively.
This happens when you mix transactional and non-transactional tables. Changes to non-transactional tables are not affected by a ROLLBACK statement.
For some reasons why this may have happened to you, we can turn to the docs:
if you were not deliberately mixing transactional and nontransactional tables within the transaction, the most likely cause for this message is that a table you thought was transactional actually is not. This can happen if you try to create a table using a transactional storage engine that is not supported by your mysqld server (or that was disabled with a startup option). If mysqld does not support a storage engine, it instead creates the table as a MyISAM table, which is nontransactional.
This will affect things negatively if, say, you have an HTTP request that kicks off a transaction, you make some changes, and you need to roll back. The transactional tables will roll back but the others will not. If a transactional storage engine is a requirement for your software, you should consider migrating all the relevant tables to the InnoDB engine.
For me this error happened after I imported a table from another Django application. The origin DB had all the table engines set to MyISAM and the destination app had all the engines set to InnoDB. When I imported the existing table, the engine was changed from InnoDB to MyISAM to match the source. I resolved this using MySQL on the command line like so:
$ mysql -uroot -pPASSWORD
> use MY_DB;
> show table status;
> alter table TABLE_WITH_MYISAM engine=innodb;
> quit;
I had imported 5 tables, so I had to run the alter command for each one. The show command above will print out the table names and engine settings for all tables in MY_DB.
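If you have many tables, one way to list only the MyISAM ones instead of scanning the full show table status output (MY_DB as above):

> SELECT table_name, engine FROM information_schema.tables
    WHERE table_schema = 'MY_DB' AND engine = 'MyISAM';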
I hope this helps solve your issue! Cheers!
I'm currently working on a Django application. I can't add an element to my database from the admin view: I fill in all the information, but when I click the save button the operation doesn't finish and I get a timeout. I use sqlite3 as the database.
My question is: does anyone know the origin of this problem? If not, how could I investigate it? When I worked with other languages (Java, C, etc.) and had a problem, I could use a debugger. What options do I have?
This problem can occur for the following reasons:
(Less probable) Your computation code is too slow: this is rare, because the timeout is set to about a minute or so, and code doesn't usually take that long to execute.
Your app is waiting on some external resource that is not responding: check the Django logs to see whether an external-resource error appears.
(Most probable) The database is taking too much time. This can occur because:
the app can't connect to the database: check the database logs, or try to connect manually via python manage.py dbshell; or
a DB query takes too long to execute: check the database logs for query timings, or connect manually via dbshell and run the same query there.
You can also use tools like django-profiler and the Django Debug Toolbar for debugging purposes, and for native Python code, the Python debugger (a sketch follows).
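As a rough sketch of that last option, you can drop into pdb inside the admin save path to find the blocking call; save_model is the real ModelAdmin hook, while myapp and MyModel are hypothetical names:

import pdb

from django.contrib import admin
from myapp.models import MyModel  # hypothetical app/model

class MyModelAdmin(admin.ModelAdmin):
    def save_model(self, request, obj, form, change):
        pdb.set_trace()  # execution pauses here; step with 'n' to find the hang
        super().save_model(request, obj, form, change)

admin.site.register(MyModel, MyModelAdmin)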
I'm using SQLAlchemy with Flask as shown here: http://flask.pocoo.org/docs/patterns/sqlalchemy/
I have a Selenium test suite that first runs with Firefox and then with Chrome.
Before the start of the tests on each browser, the tables in the test database (PostgreSQL) are dropped and created.
It runs perfectly for the first browser, but for the second browser the SQL create/drop attempt just freezes and no errors are shown.
I believe this is because of open SQLAlchemy sessions, is that correct?
I believe this is because of open SQLAlchemy sessions, is that correct?
That's most likely the case. To confirm it, connect to the postgres database and run SELECT * FROM pg_stat_activity;
I'm not sure how you handle the DB creation/dropping but you may want to call dispose() and possibly recreate() on the SQLAlchemy connection pool, after making sure that any checked out connection has been returned (for example, with session.close()).
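A sketch of that cleanup between the two browser runs (session and engine being your app's globals):

session.close()   # return any checked-out connections to the pool
engine.dispose()  # close pooled connections so DROP TABLE isn't blocked by them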
This is something that happens to me too when running Flask unit tests with SQLAlchemy and Postgres. More often than not, the culprit is an exception that did not propagate upwards and got stuck; this exception also stops the test from cleaning up properly, hence the freeze.
If you are creating a test suite, call the debug() method on the suite and it will show the exception. (The docs for this method are linked here.)
Your observation about an open SQLAlchemy session could also be a reason. I will test a theory of mine based on this observation tomorrow; if it clears up some doubts, I will post here.
Look at this answer, which shows how you can trigger the debugger on an exception. Maybe it can help pinpoint the problem.
I figured I would make this a separate question for the sake of tidiness. It is based on: SQLAlchemy won't update my database and SQLAlchemy session: how to keep it alive?
So here's the deal: I have a Pyramid application that's talking to a daemon, which in turn talks to a database.
Now for some reason stuff isn't getting committed to the database when I add it to the database session variable, as in:
DBSession.add(ModelInstance)
Calling flush or commit doesn't make it commit.
This is how I make DBSession:
from sqlalchemy import engine_from_config
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

settings = {
    'sqlalchemy.url': 'blah blah'
}
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
engine = engine_from_config(settings, 'sqlalchemy.')
DBSession.configure(bind=engine)
This seems fine to me because I can query the database fine. ie: this sort of thing works:
DBSession.query(ModelClass).get(id)
This fine gentleman https://stackoverflow.com/users/100297/martijn-pieters suggested the use of the following little bit of code:
import transaction
transaction.commit()
And that worked fine for making sure that my stuff got committed. The only problem is that it somehow renders my DBSession useless: if I want to use the objects that my session is keeping track of, I need to re-instantiate the session and those items. This sucks. It takes up a whole lot of time.
My question is, in short, how can I avoid this?
And in long:
How do I get my DBSession to commit properly without breaking it?
OR
How do I fix my DBSession and associated model instances without the need for lengthly database calls?
AND
Any idea why this is happening? I have successfully constructed a DBSession in the same way inside the Pyramid app I mentioned, and it worked totally fine; it committed when I wanted it to and everything.
For details of the errors I encountered, please refer to the two questions I mentioned at the beginning.
In SQLAlchemy sessions expire the objects they are managing upon commit. This is sane because after commit you have no guarantees in a concurrent world that something else isn't changing the state they are attempting to mirror in the database.
Pyramid recommends the use of a transaction manager that helps you maintain a single transaction per request. It will automatically call transaction.commit() for you after the request is complete. In this way, you don't have to think/worry about objects expiring, and transactions are properly aborted if your code raises an exception.
The way to setup the transaction manager is by installing pyramid_tm and zope.sqlalchemy and then connecting your DBSession to zope.sqlalchemy.ZopeTransactionExtension. Then things will "just work".
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

# ...

def main(global_conf, **settings):
    config = Configurator(...)
    config.include('pyramid_tm')
    # ...
If you need to populate a new object's primary key or ensure that some SQL will execute properly you can use DBSession.flush() to execute the SQL within your transaction without actually committing it. Any errors will be raised there for you to catch and deal with.
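For example (MyModel being a placeholder mapped class):

thing = MyModel(name='example')
DBSession.add(thing)
DBSession.flush()   # emits the INSERT inside the managed transaction
print(thing.id)     # primary key is populated, but nothing is committed yet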
This basic setup of your sessions is described within the tutorial in the Pyramid documentation:
http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/tutorials/wiki2/basiclayout.html
Update: I realized I kind of answered your question about using the transaction manager in Pyramid, which you are already using successfully. I think the answer also clearly explains what's going on with the ZopeTransactionExtension, however, and you just need confirmation about committing transactions. You'd be wise to simply use one transaction in your script, which you can create via:
import transaction

with transaction.manager:
    ...  # do tons of database stuff
Now if an exception happens the transaction will be aborted, and if not it will be committed.
According to SQLAlchemy's docs:

Another behavior of commit() is that by default it expires the state of all instances present after the commit is complete. This is so that when the instances are next accessed, either through attribute access or by them being present in a Query result set, they receive the most recent state. To disable this behavior, configure sessionmaker() with expire_on_commit=False.
What seems like a problem is actually done on purpose: when you commit, all the objects are marked as expired. This is so that you don't keep using old cached values that may have changed in the database.
The reason it works in Pyramid is that each request has its own transaction, and objects are queried again before being worked on. You are trying to use objects from a previous transaction, which might not be a good idea, because they may not be in sync with the database.
To solve your problem, you can either make sure you don't reuse objects after the end of a transaction (maybe you need your transactions to include more things), or you can use expire_on_commit=False as advised in the docs quoted above. If you use the latter, be aware that the objects may be outdated.
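A sketch of the latter option (engine configured as elsewhere in this thread):

from sqlalchemy.orm import sessionmaker

# Instances keep their loaded attribute values after commit(); be aware
# that those values may drift out of sync with the database.
Session = sessionmaker(bind=engine, expire_on_commit=False)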
If I'm reading this correctly, I think I know what is going on...
The downside of the Pyramid/Zope transaction managers is that they're all or nothing: due to the way they're implemented, you can't both use them and call commit(). I don't remember exactly why, but I once dug into all their code after fighting with this for hours, and there just wasn't a way to commit within the page.
For a variety of reasons I decided not to use the automatic transaction wrapping in my apps. I had a lot of scenarios where I wanted one or more commits, or needed to use savepoints (I use PostgreSQL).