Possible Duplicate:
Django cache.set() causing duplicate key error
I ran into this problem using django core's database cache:
ERROR: duplicate key value violates unique constraint "cache_pkey"
STATEMENT: INSERT INTO "cache" (cache_key, value, expires) VALUES (E':1:cms-menu_nodes_en-us_1', E'gAJdcQEoY21lbnVzLmJhc2UKTmF2aW
LOG: server process (PID 8453) was terminated by signal 9: Killed
LOG: terminating any other active server processes
LOG: all server processes terminated; reinitializing
FATAL: could not create shared memory segment: Cannot allocate memory
DETAIL: Failed system call was shmget(key=5432001, size=29278208, 03600).
I looked in the table and, sure enough, there is an entry for the key ':1:cms-menu_nodes_en-us_1'. I found a similar issue here, but was unable to understand exactly what the issue is.
Anyone have any ideas or suggestions? Sounds like a bug in Django core, since if a key exists, it should update the record.
edit: I should have clarified that the DB is PostgreSQL 8.4.7. Thanks lazerscience.
edit @Jack M: I haven't been able to replicate this error, but I believe the code is in django.core.cache.backends.db.DatabaseCache, in a method called set() that calls _base_set().
"Sounds like a bug in Django core, since if a key exists, it should update the record."
Indeed, but I'd suggest that said bug is related to a concurrency issue, in which case it could be fixed at the app level. As in: two neighboring calls for the same asset/page/whatever each run an exists() check, find no row, and proceed to insert as a result -- without acquiring a lock of any kind, and without wrapping the whole thing in a transaction that would let the losing call be discarded so that (since it's just a cache) things simply carry on.
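A rough sketch of that kind of app-level guard (illustrative only, not Django's actual backend code; the helper name is made up, and transaction.atomic() is the modern spelling of the savepoint you'd need on PostgreSQL):

from django.db import IntegrityError, connection, transaction

def set_cache_row(key, value, expires):
    # Hypothetical helper: try the insert, and if a concurrent request already
    # created the row, update it instead of failing with a duplicate key error.
    cursor = connection.cursor()
    try:
        # The savepoint matters on PostgreSQL: without it, the failed INSERT
        # would abort the surrounding transaction.
        with transaction.atomic():
            cursor.execute(
                "INSERT INTO cache (cache_key, value, expires) VALUES (%s, %s, %s)",
                [key, value, expires],
            )
    except IntegrityError:
        cursor.execute(
            "UPDATE cache SET value = %s, expires = %s WHERE cache_key = %s",
            [value, expires, key],
        )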
It also raises one question: are you sure you should be caching in your database in the first place? The database is typically the bottleneck in a web application (especially when using an ORM), and the whole point of caching is to avoid that bottleneck. Shouldn't you be using memcached instead?
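If you do switch, the change is mostly configuration; a minimal settings.py sketch, assuming Django's bundled memcached backend and a placeholder server address:

# settings.py -- cache in memcached instead of the database.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",
    }
}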
Related
We have recently been having issues with our prod servers connecting to Oracle. Intermittently we get "DatabaseError: ORA-12502: TNS:listener received no CONNECT_DATA from client". The issue is completely random, goes away by itself within a second, and is not a Django problem; we can replicate it with SQL*Plus from the servers.
We opened a ticket with Oracle support, but in the meantime I'm wondering if it's possible to simply retry any DB-related operation when it fails.
The problem is that I can't use try/except blocks in the code to handle this, since it can happen on ANY DB interaction in the entire codebase. I have to do it at a lower level so that I only do it once. Is there any way to install an error handler, or something like that, directly at the django.db.backends.oracle level so that it covers the whole codebase? Basically, all I want to do is this:
try:
    execute_sql()
except DatabaseError as e:
    if "ORA-12502" in str(e):
        time.sleep(1)  # wait a second
        execute_sql()  # re-try the same operation
Is this even possible, or am I out of luck?
Thanks!
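For what it's worth, one possible direction (a rough, untested sketch; the helper name is made up, and actually wiring it into django.db.backends.oracle's cursor class is version-specific and not shown here) is to funnel DB calls through a small retry wrapper:

import time
from functools import wraps

from django.db import DatabaseError

def retry_on_ora12502(func, retries=1, delay=1.0):
    # Hypothetical wrapper: call func, and retry once if ORA-12502 shows up.
    @wraps(func)
    def wrapper(*args, **kwargs):
        attempt = 0
        while True:
            try:
                return func(*args, **kwargs)
            except DatabaseError as exc:
                if "ORA-12502" in str(exc) and attempt < retries:
                    attempt += 1
                    time.sleep(delay)
                    continue
                raise
    return wrapper

# Usage: safe_execute = retry_on_ora12502(execute_sql); safe_execute(...)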
While using cx_Oracle (Python), the code hangs when the following statement is executed:
some_connection.execute(some_sql)
What could be the reason?
Without seeing the actual SQL in question it is hard to know for sure. Some possible answers include:
1) the SQL actually takes a long time to execute (and you just have to be patient)
2) the SQL is blocked by another transaction (and that transaction needs to be committed or rolled back first)
You can find out by examining the contents of dba_locks, specifically looking at the blocking_others column. You can also attempt to issue the same SQL in SQL*Plus and see if it exhibits the same behaviour.
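For example, a rough cx_Oracle sketch of that check (the connect string is a placeholder, and reading dba_locks generally requires DBA privileges):

import cx_Oracle

# Placeholder credentials/DSN.
connection = cx_Oracle.connect("system/password@localhost/XE")
cursor = connection.cursor()
cursor.execute("""
    SELECT session_id, lock_type, mode_held, mode_requested, blocking_others
      FROM dba_locks
     WHERE blocking_others <> 'Not Blocking'
""")
for row in cursor:
    print(row)  # sessions holding locks that block other sessions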
I have a year-old production site configured with the django.contrib.sessions.backends.cached_db backend on a MySQL database. The reason I chose cached_db is the mix of security and read performance.
The problem is that the cleanup command, responsible for deleting all expired sessions, was never executed, resulting in a session table with 2.3 GB of data, 6 million rows and 500 MB of index.
When I try to run ./manage.py cleanup (in Django 1.3), or ./manage.py clearsessions (its Django 1.5 equivalent), the process never ends (or at least outlasts my three hours of patience).
The code that Django uses to do this is:
Session.objects.filter(expire_date__lt=timezone.now()).delete()
At first glance that seems normal, because the table has 6M rows, but after inspecting the system monitor I discovered that all the memory and CPU were being used by the python process, not mysqld, exhausting my machine's resources. Something seems terribly wrong with this command's code: it looks like Python iterates over every expired session row it finds before deleting them one by one. In that case, refactoring the code to issue a raw DELETE FROM would solve my problem and help the Django community, right? But if that is the case, the QuerySet delete() is behaving strangely and is anything but optimized, in my opinion. Am I right?
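For reference, this is the kind of raw delete I have in mind, sketched only (it assumes the default django_session table; depending on the Django version and transaction settings an explicit commit may be needed afterwards):

from django.db import connection
from django.utils import timezone

# Delete expired sessions inside the database, without loading rows into Python.
cursor = connection.cursor()
cursor.execute(
    "DELETE FROM django_session WHERE expire_date < %s",
    [timezone.now()],
)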
I have built a little custom web framework on top of Python 3.2, using CherryPy to build the WSGI application and SQLAlchemy Core (just for connection pooling and executing textual SQL statements).
Versions I am using:
Python: 3.2.3
CherryPy: 3.2.2
SQLAlchemy: 0.7.5
Psycopg2: 2.4.5
For every request, a DB connection is retrieved from the pool using sqlalchemy.engine.base.Engine's connect method. After the request handler finishes, the connection is closed using the close method. Pseudocode, for example:
with db.connect() as db:
    handler(db)
Where db.connect() is a context manager defined like this:
from contextlib import contextmanager

@contextmanager
def connect(self):
    conn = self.engine.connect()
    try:
        yield conn
    finally:
        conn.close()
I hope this is the correct practice for this task. It worked until things got more complicated in the page handlers.
I am getting weird behavior: for some unknown reason, the connection is sometimes closed before the handler finishes its work. But not every time!
From observation, this happens only when requests are made in quick succession. If I make a small pause between requests, the connection is not closed and the request finishes successfully. Even then, it does not happen every time; I have not found a more specific pattern in which requests fail or succeed.
I have verified that the connection is not being closed by my context manager; it is already closed by that point.
My question:
How do I figure out when, why, and by what code my connection is closed?
I tried debugging. I put a breakpoint on sqlalchemy.engine.base.Connection's close method, but the connection is closed before it ever reaches that code, which is weird.
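One idea I have not tried yet (a rough sketch of my own, untested): monkeypatch Connection.close at startup so every close prints the stack that triggered it, which should reveal the closing code path even when the breakpoint is never hit:

import traceback

from sqlalchemy.engine.base import Connection

_original_close = Connection.close

def traced_close(self):
    # Show who is closing this connection before delegating to the real close().
    print("closing %r from:" % self)
    traceback.print_stack()
    return _original_close(self)

Connection.close = traced_close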
I would appreciate any tips or help.
edit:
Information requested by zzzeek:
symptom of the "connection being closed":
Sorry for not clarifying this before. It is the sqlalchemy.engine.Connection that is closed.
In the handlers I am calling sqlalchemy.engine.base.Connection's execute method to get data from the database (select statements). I can say that the sqlalchemy.engine.Connection is closed because I am checking its closed property before calling execute.
I can post the traceback here, but the only thing you will probably see in it is that the exception is raised before the execute call in my DB wrapper library (because the connection is closed).
If I remove this check (and let the execute method execute), SQLAlchemy raises this exception: http://pastebin.com/H6052yca
Regarding the concurrency problem that zzzeek mentioned: I must apologize. After more observation, the situation is slightly different.
This is the exact procedure to invoke the error:
Request for HandlerA. Everything ok.
Wait moment (about 10-20s).
Request for HandlerB. Everything ok.
Request for HandlerA. Everything ok.
Immediate request for HandlerB. Error!
Immediate request for HandlerB. Error!
Immediate request for HandlerB. Error!
Wait moment (about 10-20s).
Request for HandlerB. Everything ok.
I am using default SQLAlchemy pooling class with pool_size = 5.
I know you cannot work miracles without the actual code, but unfortunately I cannot share it. Is there any best practice for debugging this type of error? Or is the only option to debug more deeply, step by step, and try to figure it out?
Another observation:
When I start the server in the debugger (WingIDE), I cannot reproduce the error. Probably because the debugger is so slow when interpreting the code that the connection is somehow "repaired" before the second request (RequestB) is handled.
After a day-long debugging session, I found the problem.
Unfortunately it was not related to SQLAlchemy directly, so the question should probably be deleted. But you guys tried to help me, so I will answer my own question, and maybe somebody will find this helpful some day.
Basically, the error was caused by my custom publish/subscribe methods, which did not play nicely in a multi-threaded environment.
I tried stepping through the code line by line, which did not work (as I described in the question), so I started generating a very detailed log of what was going on.
Even then everything looked normal, until I noticed that a few lines before the crash, the address of the Connection object referenced in the model changed. That practically meant that something had assigned another Connection object to the model, and that connection object was already closed.
So the lesson is: when everything looks correct, print out / log the repr() of the objects that are problematic.
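For example, something along these lines (a rough, untested sketch; the engine URL is a placeholder, and it assumes SQLAlchemy's pool events can be attached to the engine) would have made the changing connection identity visible much earlier:

import logging

from sqlalchemy import create_engine, event

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("connwatch")

# Placeholder URL; the interesting part is the event hooks below.
engine = create_engine("postgresql+psycopg2://user:password@localhost/appdb", pool_size=5)

@event.listens_for(engine, "checkout")
def on_checkout(dbapi_conn, connection_record, connection_proxy):
    # Log the repr() so the identity of the pooled connection shows up per request.
    log.debug("checkout %r", dbapi_conn)

@event.listens_for(engine, "checkin")
def on_checkin(dbapi_conn, connection_record):
    log.debug("checkin %r", dbapi_conn)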
Thanks to commenters for their time.
I'm using MySQLdb in Python.
I have an update that may succeed or fail:
UPDATE table
SET reserved_by = PID,
state = "unavailable"
WHERE state = "available"
AND id = REQUESTED_ROW_ID
LIMIT 1;
As you may be able to infer, multiple processes are using the database, and I need processes to be able to securely grab rows for themselves, without race conditions causing problems.
My theory (perhaps incorrect) is that only one process will be able to succeed with this query (.rowcount=1) -- the others will fail (.rowcount=0) or get a different row (.rowcount=1).
The problem is that everything that happens through MySQLdb appears to happen in a virtual world: .rowcount reads 1, but you can't really know whether anything actually happened until you perform a .commit().
My questions:
1) In MySQL, is a single UPDATE atomic in itself? That is, if the same UPDATE above (with different PID values, but the same REQUESTED_ROW_ID) were sent to the same MySQL server at "once," am I guaranteed that one will succeed and the other will fail?
2) Is there a way to know, after calling conn.commit(), whether there was a meaningful change or not? Can I get a reliable .rowcount for the actual commit operation?
3) Does the .commit operation send the actual query (SETs and WHERE conditions intact), or does it just perform the SETs on the affected rows, independent of the WHERE clauses that inspired them?
4) Is my problem solved neatly by .autocommit?
Turn autocommit on.
The commit operation just "confirms" updates already done. The alternative is rollback, which "undoes" any updates already made.
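A rough MySQLdb sketch of that advice (connection parameters, table and variable names are placeholders): with autocommit on, the UPDATE takes effect as soon as it executes, and cursor.rowcount tells you whether this process actually grabbed the row:

import MySQLdb

# Placeholder connection parameters and identifiers.
conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
conn.autocommit(True)  # every statement is committed as soon as it runs

pid = 1234              # this process's identifier
requested_row_id = 42   # the row we are trying to reserve

cursor = conn.cursor()
cursor.execute(
    """UPDATE some_table
          SET reserved_by = %s,
              state = 'unavailable'
        WHERE state = 'available'
          AND id = %s
        LIMIT 1""",
    (pid, requested_row_id),
)

if cursor.rowcount == 1:
    print("row reserved by this process")
else:
    print("another process got there first")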