I have accidentally dropped all databases from my MongoDB instance. Then I tried to insert a document into a new database, and it throws the error "Unable to persist transaction state because the session transaction collection is missing. This indicates that the config.transactions collection has been manually deleted."
My sample code:
from pymongo import MongoClient

doc_client = MongoClient(host=host,
                         port=port,
                         connect=True,  # Connect on first operation to avoid multi-threading related errors
                         j=True)        # Requests only return once the write has hit the DB journal

print(doc_client.database_names())              # This works fine
doc_client['test']['test'].insert({'a': 'ss'})  # Throws the error above
I have accidentally dropped all databases from my MongoDB instance
It is likely that you have also dropped the config.transactions collection. This is an internal collection that stores records used to support retryable writes for replica sets and sharded clusters. See also Config Databases.
Since MongoDB v3.6, users cannot drop the config database in a replica set from the mongo shell. If you connect with a mongo shell older than v3.6 you can still do so, so please make sure to upgrade the shell to match the server version.
"Unable to persist transaction state because the session transaction collection is missing. This indicates that the config.transactions collection has been manually deleted."
You can manually re-create the collection on the primary node:
use config
db.createCollection("transactions");
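If you prefer to stay in Python, the PyMongo equivalent should look roughly like this (a sketch, assuming the doc_client from the question is connected to the primary):
# Recreate the internal collection used for retryable-write records.
doc_client['config'].create_collection('transactions')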
Alternatively, a replica set election will also automatically re-create it, because creating the config.transactions collection is part of a replica set node's step-up (see session_catalog_mongod.cpp#L156).
The new config.transactions collection will be replicated to the secondaries once the primary has completed the catch-up phase.
Related
I'm having a problem with the sessions in my python/wsgi web app. There is a different, persistent mysqldb connection for each thread in each of 2 wsgi daemon processes. Sometimes, after deleting old sessions and creating a new one, some connections still fetch the old sessions in a select, which means they fail to validate the session and ask for login again.
Details: Sessions are stored in an InnoDB table in a local mysql database. After authentication (through CAS), I delete any previous sessions for that user, create a new session (insert a row), commit the transaction, and redirect to the originally requested page with the new session id in the cookie. For each request, a session id in the cookie is checked against the sessions in the database.
Sometimes, a newly created session is not found in the database after the redirect. Instead, the old session for that user is still there. (I checked this by selecting and logging all of the sessions at the beginning of each request). Somehow, I'm getting cached results. I tried selecting the sessions with SQL_NO_CACHE, but it made no difference.
Why am I getting cached results? Where else could the caching occur, and how can I stop it or refresh the cache? Basically, why do the other connections fail to see the newly inserted data?
MySQL defaults to the isolation level "REPEATABLE READ", which means you will not see any changes in your transaction that were made (by other connections) after your transaction started - even if those changes were committed.
If you issue a COMMIT or ROLLBACK in those sessions, you should see the changed data (because that will end the transaction that is "in progress").
The other option is to change the isolation level for those sessions to "READ COMMITTED". Maybe there is an option to change the default level as well, but you would need to check the manual for that.
Yes, it looks like the assumption is that you are only going to perform a single transaction and then disconnect. If you have a different need, you need to work around this assumption. As mentioned by a_horse_with_no_name, you can issue a commit (though I would use a rollback if you are not actually changing data). Or you can change the isolation level on the cursor - based on this discussion I used this:
dbcursor.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")
Or, it looks like you can set autocommit to true on the connection:
dbconn.autocommit(True)
Though, again, this is not recommended if you are actually making changes on the connection.
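Putting both suggestions together, a minimal sketch with MySQLdb might look like this (the connection details, table name, and column names are assumptions, not the poster's actual schema):
import MySQLdb

# One long-lived connection per thread, as in the question.
conn = MySQLdb.connect(host="localhost", user="webapp", passwd="secret", db="appdb")

# Option 1: let this connection see other connections' commits right away.
cur = conn.cursor()
cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")

def load_session(session_id):
    # Option 2: end the implicit transaction left over from the previous
    # request so the next SELECT starts from a fresh snapshot.
    conn.rollback()
    cur = conn.cursor()
    cur.execute("SELECT user_id, expires_at FROM sessions WHERE id = %s", (session_id,))
    return cur.fetchone()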
Referring to the example in the Django documentation for multiple databases in one application,
https://docs.djangoproject.com/en/dev/topics/db/multi-db/#an-example
" It also doesn’t consider the interaction of transactions with the database utilization strategy. "
How do I handle the interaction described above?
The scenario is this:
I am using PostgreSQL as my database. I have set up a replica and want all reads of the "auth" tables to go to the replica. Following the documentation I wrote a database router. Now whenever I try to log in to my application, it throws the following error:
DatabaseError: cannot execute UPDATE in a read-only transaction.
This happens when Django tries to save the "last_login" time, because in the same view it first fetches the record from the replica and then tries to update the last_login time. Since both happen in one transaction, the same database (the replica) is used.
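For reference, a router of the kind described might look like this rough sketch (the 'replica' alias name is an assumption); it reproduces the error because, with no opinion from db_for_write, Django falls back to the database the instance was read from, i.e. the replica:
class AuthRouter(object):
    """Send reads for the auth app to the replica (sketch of the setup described above)."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'auth':
            return 'replica'
        return None  # other apps fall through to 'default'

    def db_for_write(self, model, **hints):
        # No opinion: Django then saves the instance to the database it was
        # loaded from, which is how the last_login UPDATE hits the replica.
        return None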
How do I handle this?
Thoughts?
I am currently working on a new web application that needs to execute an SQL statement before giving a session to the application itself.
In detail: I am running a PostgreSQL database server with multiple schemas and I need to execute a SET search_path statement before the application uses the session. I am also using the ZopeTransactionExtension to have transactions automatically handled at the request level.
To ensure the execution of the SQL statement, there seem to be two possible ways:
Executing the statement at the Engine/Connection level via SQLAlchemy events (from Multi-tenancy with SQLAlchemy); a sketch of this option follows just after this list
Executing the statement at the session level (from SQLAlchemy support of Postgres Schemas)
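For context, the Engine-level variant from the first option usually boils down to a pool "connect" listener like this (a sketch; the DSN and schema name are placeholders, not from the original posts):
from sqlalchemy import create_engine, event

engine = create_engine("postgresql://user:secret@localhost/mydb")

@event.listens_for(engine, "connect")
def set_search_path(dbapi_connection, connection_record):
    # Runs once for every new DBAPI connection added to the pool.
    cursor = dbapi_connection.cursor()
    cursor.execute("SET search_path TO tenant_schema")
    cursor.close()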
Since I am using a scoped session and want to keep my transactions intact, I wonder which of these ways will possibly disturb transaction management.
For example, does the Engine hand out a new connection from the Pool on every query? Or is it attached to the session for its lifetime, i.e. until the request has been processed and the session & transaction are closed/committed?
On the other hand, since I am using a scoped session, can I perform it the way zzzeek suggested it in the second link? That is, is the context preserved and automatically reset once the transaction is over?
Is there possibly a third way that I am missing?
For example, does the Engine hand out a new connection from the Pool on every query?
Only if you have autocommit=True, which should not be the case.
Or is it attached to the session for its lifetime, i.e. until the request has been processed and the session & transaction are closed/committed?
It's attached per transaction. But the search_path in PostgreSQL is per PostgreSQL session (not to be confused with an SQLAlchemy session) - it's basically the lifespan of the connection itself.
The Session (and the Engine, and the Pool) these days have a ton of event hooks you can use to set up state like this. If you want to stick with the Session, you can try after_begin.
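A minimal sketch of that suggestion (the schema name is a placeholder; with a scoped session the same listener can be registered on its underlying Session class or sessionmaker):
from sqlalchemy import event, text
from sqlalchemy.orm import Session

@event.listens_for(Session, "after_begin")
def set_search_path(session, transaction, connection):
    # Fires when a transaction begins on a connection, so the setting is
    # re-applied for every transaction regardless of which pooled
    # connection the session was handed.
    connection.execute(text("SET search_path TO tenant_schema"))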
I am building a small system that serves data from a MongoDB collection. It already works fine, but I have to restart it every time I make changes.
I already have a monitor that detects changes and restarts the server automatically, but I want to do something similar for MongoDB changes.
I am currently using CentOS 5, Nginx, uWSGI & Python 2.7.
I'd look into using tailable cursors, which remain alive after they've reached the end of a capped collection and can block until a new object is available.
Using PyMongo, you can call Collection.find with a tailable=True option to enable this behavior (newer PyMongo versions spell this cursor_type=CursorType.TAILABLE instead). This blog post gives some good examples of its usage.
Additionally, instead of just querying the collection, which will only alert you to new objects added to that collection, you may want to query the database's oplog, which is a capped collection of all inserts, updates, and deletes run against any collection in the database. Note that replication must be enabled for mongo to keep an oplog. Check out this blog post for info about the oplog and enabling replication.
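To make that concrete, here is a rough sketch of tailing the oplog with PyMongo (the connection details are placeholders, and the node must be a replica set member so that local.oplog.rs exists):
import time
import pymongo

client = pymongo.MongoClient('localhost', 27017)
oplog = client['local']['oplog.rs']  # the oplog lives in the 'local' database

# Start tailing after the most recent entry.
last = next(oplog.find().sort('$natural', pymongo.DESCENDING).limit(1))
ts = last['ts']

while True:
    # A tailable cursor stays open and waits for new entries; re-create it
    # if it dies (e.g. the cursor gets invalidated or times out).
    cursor = oplog.find({'ts': {'$gt': ts}},
                        cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
    for doc in cursor:
        ts = doc['ts']
        print(doc['op'], doc.get('ns'), doc.get('o'))
    time.sleep(1)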