Is pymysql connection thread safe? Is pymysql cursor thread safe? - python

I have a queue data structure from which multiple threads consume items. Each thread writes to a database using PyMySQL; no other synchronization is needed among the threads.
Is it race-free to use the same cursor, coming from the same PyMySQL connection, in all threads?
Is it race-free to use a different cursor per thread, coming from the same connection?
(Of course, using multiple connections in multiple threads is fine, because in that case there is no shared resource; I have no interest in that case.)

Thanks to El Ruso for pointing me in the right direction.
I found the answer in the PyMySQL source, after understanding that DB-API 2.0 (PEP 249) specifies how each implementation must answer this question. In the case of PyMySQL, the answer is that neither connections nor cursors are thread-safe.
https://github.com/PyMySQL/PyMySQL/blob/master/pymysql/__init__.py#L40
PyMySQL declares threadsafety = 1, which means: threads may share the module, but not connections.
(Read the PEP-0249 specification http://legacy.python.org/dev/peps/pep-0249/#threadsafety)
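In practice that means: give each consumer thread its own connection. A minimal sketch of the queue-consumer setup described above, assuming placeholder connection parameters and a results table that already exists:

import queue
import threading
import pymysql

items = queue.Queue()

def consumer():
    # threadsafety == 1: threads may share the pymysql module, but each
    # thread must own its private connection (and thus its own cursors).
    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="mydb")
    while True:
        item = items.get()
        if item is None:       # sentinel: shut this worker down
            break
        with conn.cursor() as cur:
            cur.execute("INSERT INTO results (value) VALUES (%s)", (item,))
        conn.commit()
    conn.close()

for _ in range(4):
    threading.Thread(target=consumer, daemon=True).start()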

Related

Is a MySQLdb cursor for Python blocking in nature by default?

I have a MySQLdb installation for Python 2.7.6. I have created a MySQLdb cursor once and would like to reuse it for every incoming request. If 100 users are simultaneously active and doing a DB query, does the cursor serve each request one by one, blocking the others?
If that is the case, is there a way to avoid it? Will a connection pool do the job in a thread-safe manner, or should I look at Gevent/monkey patching?
Your responses are welcome.
You will want to use a connection pool.
The MySQL driver in Python is not thread-safe, meaning multiple requests/threads cannot use the same connection at the same time.
A connection pool essentially works by keeping a number of connections (a pool) ready, handing one out to each thread. When the thread is done, it returns the connection to the pool and another request/thread can use it.
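As an illustration, a minimal pool can be built on a thread-safe queue; this is a sketch with placeholder connection parameters, not a production pool (no reconnects or health checks):

import queue
from contextlib import contextmanager
import MySQLdb

pool = queue.Queue()
for _ in range(5):                       # pool of five ready connections
    pool.put(MySQLdb.connect(host="localhost", user="app",
                             passwd="secret", db="mydb"))

@contextmanager
def get_conn():
    conn = pool.get()                    # blocks until a connection is free
    try:
        yield conn
    finally:
        pool.put(conn)                   # hand it back for the next thread

# Usage from any request/thread:
with get_conn() as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    cur.close()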
For this purpose you can use a persistent connection or a connection pool.
Persistent connection: a very, very, very bad idea. Don't use it! Just don't! Especially when you are talking about web programming.
Connection pool: better than a persistent connection, but without a deep understanding of how it works, you will end up with the same problems as a persistent connection.
Don't optimize unless you really have performance problems. On the web, it's common to open/close a connection per page request. It works really fast. You are better off optimizing SQL queries, indexes, and caches.
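For reference, the open/close-per-request pattern looks like this; a sketch assuming PyMySQL, placeholder credentials, and a hypothetical request handler:

import pymysql

def handle_request(user_id):
    # A fresh connection per request: nothing is shared between threads.
    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="mydb")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
    finally:
        conn.close()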

Concurrent writes with SQLite and Peewee

I'm planning to use SQLite and Peewee (ORM) for a light duty internal web service (<20 requests per second). The web service can handle multiple simultaneous requests on multiple threads. During each request the database will be both read from and written to. This means I will need to have the ability for both concurrent reads AND writes. It doesn't matter to this application if the data changes between reads and writes.
The SQLite FAQ says that concurrent reads are permitted but concurrent writes from multiple threads require acquiring the file lock. My question is: Does Peewee take care of this locking for me or is there something I need to do in my code to make this possible?
The Peewee database object is shared between threads. I assume this means that the database connection is shared too.
I can't find a Peewee specific answer to this so I'm asking here.
Sqlite is the one doing the locking, although I can see how you might be confused -- the FAQ wording is a bit ambiguous:
When any process wants to write, it must lock the entire database file for the duration of its update. But that normally only takes a few milliseconds. Other processes just wait on the writer to finish then continue about their business. Other embedded SQL database engines typically only allow a single process to connect to the database at once.
So if you have two threads, each with their own connection, and one acquires the write lock, the other thread will have to wait for the lock to be released before it can start writing.
Looking at pysqlite, the default busy timeout looks to be 5 seconds, so the second thread should wait up to 5 seconds before raising an OperationalError.
Also, I'd suggest instantiating your SqliteDatabase with threadlocals=True. That will store a connection-per-thread.
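A sketch of that setting (note: threadlocals is the older Peewee 2.x API; in Peewee 3.x per-thread connection state is the default and the argument was removed):

from peewee import SqliteDatabase, Model, TextField

# One connection per thread; each thread connects/closes around its own work.
db = SqliteDatabase('app.db', threadlocals=True)

class Entry(Model):
    body = TextField()

    class Meta:
        database = db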
Consider running all write operations within one async process. This is what made JavaScript server programming so famous recently (although the idea has been known far longer). It just requires that you be somewhat familiar with the asynchronous programming concept of callbacks:
For SQLITE:
Async concept directly in Sqlite: https://www.sqlite.org/asyncvfs.html
APSW (Another Sqlite Wrapper), which better supports SQLite extensions, in Peewee: http://peewee.readthedocs.org/en/latest/peewee/playhouse.html#apsw
For ANY DB:
Consider writing your own thin async handler in Python, as solved here, e.g.:
SQLAlchemy + Requests Asynchronous Pattern
I would recommend the last approach, as it gives you more code portability, control, independence from the backend database engine, and scalability.

Segmentation fault error in a multi threaded app in python

I have a multithreaded app in Python in which I create multiple producer threads that extract data from the DB. Data is extracted in chunks. The part where a thread builds the SQL statement with limit values is kept within a lock, and to let threads execute queries simultaneously, the query() function is kept outside the lock. The result-fetching part is again kept under the lock. Below is the code snippet:
with UserAgent.lock:
    sqlGeoTarget = "call sp_ax_ari_select_user_agent_list('0'," + str(self.chunkStart) + "," + str(self.chunkSize) + ",1);"
    self.chunkStart += self.chunkSize
self.dbObj.query(sqlGeoTarget)
print "query executed. Processing data now..." + sqlGeoTarget
with UserAgent.lock:
    result = self.dbObj.fetchAll()
    self.dbObj.dbCursor.close()
But this code generates the fatal error "segmentation fault (core dumped)", whereas if I put all the code under the lock, it executes fine. I explicitly close the cursor after fetching the data; it is reopened when the query() function fires again.
This code is inside a class named UserAgent, which is a shared resource of a class named Producer, so the database object is shared. The problem is therefore almost certainly that, with the DB object shared, hitting query() simultaneously and then closing the cursor messes up the result set. But then how do I solve this problem and achieve concurrent DB query execution?
Do not reuse connections across threads. Create a new connection for each thread instead.
From the MySQLdb User Guide:
The MySQL protocol can not handle multiple threads using the same connection at once. Some earlier versions of MySQLdb utilized locking to achieve a threadsafety of 2. While this is not terribly hard to accomplish using the standard Cursor class (which uses mysql_store_result()), it is complicated by SSCursor (which uses mysql_use_result(); with the latter you must ensure all the rows have been read before another query can be executed. It is further complicated by the addition of transactions, since transactions start when a cursor execute a query, but end when COMMIT or ROLLBACK is executed by the Connection object. Two threads simply cannot share a connection while a transaction is in progress, in addition to not being able to share it during query execution. This excessively complicated the code to the point where it just isn't worth it.
The general upshot of this is: Don't share connections between threads. It's really not worth your effort or mine, and in the end, will probably hurt performance, since the MySQL server runs a separate thread for each connection. You can certainly do things like cache connections in a pool, and give those connections to one thread at a time. If you let two threads use a connection simultaneously, the MySQL client library will probably upchuck and die. You have been warned.
Emphasis mine.
Use thread local storage or a dedicated connection pooling library instead.
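A minimal sketch of the thread-local option, assuming MySQLdb and placeholder connection parameters:

import threading
import MySQLdb

_local = threading.local()

def get_conn():
    # Lazily create one connection per thread and cache it on the
    # thread-local object, so no two threads ever share a connection.
    if not hasattr(_local, "conn"):
        _local.conn = MySQLdb.connect(host="localhost", user="app",
                                      passwd="secret", db="mydb")
    return _local.conn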

Error when using multithreading and MySQLdb

I'm getting the following errors when my multithreaded program accesses the data:
Exception in thread Thread-2:
ProgrammingError: (2014, "Commands out of sync; you can't run this command now")
Exception in thread Thread-3:
ProgrammingError: execute() first
According to PEP 249, data access modules have a module level constant threadsafety:
Integer constant stating the level of thread safety the interface
supports. Possible values are:
0 Threads may not share the module.
1 Threads may share the module, but not connections.
2 Threads may share the module and connections.
3 Threads may share the module, connections and
cursors.
Sharing in the above context means that two threads may use a resource
without wrapping it using a mutex semaphore to implement resource
locking. Note that you cannot always make external resources thread
safe by managing access using a mutex: the resource may rely on global
variables or other external sources that are beyond your control.
According to MySQLdb User's Guide, the module supports level 1.
The MySQL protocol can not handle multiple threads using the same connection at once. Some earlier versions of MySQLdb utilized locking to achieve a threadsafety of 2. While this is not terribly hard to accomplish using the standard Cursor class (which uses mysql_store_result()), it is complicated by SSCursor (which uses mysql_use_result(); with the latter you must ensure all the rows have been read before another query can be executed. It is further complicated by the addition of transactions, since transactions start when a cursor execute a query, but end when COMMIT or ROLLBACK is executed by the Connection object. Two threads simply cannot share a connection while a transaction is in progress, in addition to not being able to share it during query execution. This excessively complicated the code to the point where it just isn't worth it.
The general upshot of this is: Don't share connections between threads. It's really not worth your effort or mine, and in the end, will probably hurt performance, since the MySQL server runs a separate thread for each connection. You can certainly do things like cache connections in a pool, and give those connections to one thread at a time. If you let two threads use a connection simultaneously, the MySQL client library will probably upchuck and die. You have been warned.
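You can verify the declared level at runtime, since PEP 249 requires the constant to be exported at module level:

import MySQLdb

# 1 -> threads may share the module, but not connections
print(MySQLdb.threadsafety)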
Here is detail about the error: http://dev.mysql.com/doc/refman/5.0/en/commands-out-of-sync.html
MySQLdb's manual suggests this:
Don't share connections between threads. It's really not worth your effort or mine, and in the end, will probably hurt performance, since the MySQL server runs a separate thread for each connection. You can certainly do things like cache connections in a pool, and give those connections to one thread at a time. If you let two threads use a connection simultaneously, the MySQL client library will probably upchuck and die. You have been warned.
For threaded applications, try using a connection pool. This can be done using the Pool module.
For more information, search for the keyword threadsafety in the MySQLdb manual.
With the little information you've given, I can only guess.
Probably you access the database from several threads without locking. That's bad.
You should hold a threading.Lock() or threading.RLock() while accessing your DB. This prevents several threads from interfering with each other's actions.
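A minimal sketch of that locking approach, assuming a single shared DB-API connection named conn that is created elsewhere:

import threading

db_lock = threading.Lock()

def run_query(conn, sql, params=()):
    # Serialize all access to the shared connection: only one thread may
    # execute a statement and fetch its result set at a time.
    with db_lock:
        cur = conn.cursor()
        try:
            cur.execute(sql, params)
            return cur.fetchall()
        finally:
            cur.close()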

Python sqlite3 and concurrency

I have a Python program that uses the "threading" module. Once every second, my program starts a new thread that fetches some data from the web, and stores this data to my hard drive. I would like to use sqlite3 to store these results, but I can't get it to work. The issue seems to be about the following line:
conn = sqlite3.connect("mydatabase.db")
If I put this line of code inside each thread, I get an OperationalError telling me that the database file is locked. I guess this means that another thread has mydatabase.db open through a sqlite3 connection and has locked it.
If I put this line of code in the main program and pass the connection object (conn) to each thread, I get a ProgrammingError, saying that SQLite objects created in a thread can only be used in that same thread.
Previously I was storing all my results in CSV files, and did not have any of these file-locking issues. Hopefully this will be possible with sqlite. Any ideas?
Contrary to popular belief, newer versions of sqlite3 do support access from multiple threads.
This can be enabled via optional keyword argument check_same_thread:
sqlite3.connect(":memory:", check_same_thread=False)
You can use the consumer-producer pattern. For example, you can create a queue that is shared between threads. The first thread, which fetches data from the web, enqueues this data into the shared queue. Another thread, which owns the database connection, dequeues the data and passes it to the database.
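A minimal sketch of that pattern: one dedicated writer thread owns the sqlite3 connection, and every other thread only touches the queue (the table and column names are placeholders):

import queue
import sqlite3
import threading

write_q = queue.Queue()

def db_writer():
    # The only thread that ever touches the connection.
    conn = sqlite3.connect("mydatabase.db")
    while True:
        item = write_q.get()
        if item is None:       # sentinel: shut the writer down
            break
        conn.execute("INSERT INTO results (data) VALUES (?)", (item,))
        conn.commit()
    conn.close()

threading.Thread(target=db_writer, daemon=True).start()

# Any fetcher thread just enqueues; it never sees the connection:
write_q.put("some fetched data")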
The following was found on mail.python.org pipermail (message 1239789):
I have found the solution. I don't know why the Python documentation has not a single word about this option. So we have to add a new keyword argument to the connect function, and we will be able to create cursors from it in different threads. So use:
sqlite3.connect(":memory:", check_same_thread=False)
This works out perfectly for me. Of course, from now on I need to take care of safe multithreaded access to the DB. Anyway, thanks all for trying to help.
Switch to multiprocessing. It is much better, scales well, can go beyond the use of multiple cores by using multiple CPUs, and the interface is the same as Python's threading module.
Or, as Ali suggested, just use SQLAlchemy's thread pooling mechanism. It will handle everything for you automatically and has many extra features, just to quote some of them:
SQLAlchemy includes dialects for SQLite, Postgres, MySQL, Oracle, MS-SQL, Firebird, MaxDB, MS Access, Sybase and Informix; IBM has also released a DB2 driver. So you don't have to rewrite your application if you decide to move away from SQLite.
The Unit Of Work system, a central part of SQLAlchemy's Object Relational Mapper (ORM), organizes pending create/insert/update/delete operations into queues and flushes them all in one batch. To accomplish this it performs a topological "dependency sort" of all modified items in the queue so as to honor foreign key constraints, and groups redundant statements together where they can sometimes be batched even further. This produces the maximum efficiency and transaction safety, and minimizes chances of deadlocks.
You shouldn't be using threads at all for this. This is a trivial task for twisted and that would likely take you significantly further anyway.
Use only one thread, and have the completion of the request trigger an event to do the write.
twisted will take care of the scheduling, callbacks, etc... for you. It'll hand you the entire result as a string, or you can run it through a stream-processor (I have a twitter API and a friendfeed API that both fire off events to callers as results are still being downloaded).
Depending on what you're doing with your data, you could just dump the full result into sqlite as it's complete, cook it and dump it, or cook it while it's being read and dump it at the end.
I have a very simple application that does something close to what you're wanting on github. I call it pfetch (parallel fetch). It grabs various pages on a schedule, streams the results to a file, and optionally runs a script upon successful completion of each one. It also does some fancy stuff like conditional GETs, but still could be a good base for whatever you're doing.
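A hedged sketch of that single-threaded style, assuming the old twisted.web.client.getPage helper (deprecated in modern Twisted in favour of Agent/treq) and a placeholder URL; every callback runs on the reactor thread, so the SQLite connection is never shared:

import sqlite3
from twisted.internet import reactor, task
from twisted.web.client import getPage

conn = sqlite3.connect("mydatabase.db")
conn.execute("CREATE TABLE IF NOT EXISTS pages (body TEXT)")

def store(body):
    # Runs on the reactor thread, the same thread that owns the connection.
    conn.execute("INSERT INTO pages (body) VALUES (?)", (body,))
    conn.commit()

def fetch():
    getPage(b"http://example.com/data").addCallback(store)

task.LoopingCall(fetch).start(1.0)   # fire once per second
reactor.run()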
Or if you are lazy, like me, you can use SQLAlchemy. It will handle the threading for you (using thread locals and some connection pooling), and the way it does so is even configurable.
For an added bonus, if/when you realise/decide that using SQLite for a concurrent application is going to be a disaster, you won't have to change your code to use MySQL, Postgres, or anything else. You can just switch over.
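A minimal sketch of the SQLAlchemy route; the engine URL and table are placeholders, and check_same_thread=False lets the pool hand SQLite connections across threads:

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///mydatabase.db",
                       connect_args={"check_same_thread": False})

def store(url, body):
    # begin() checks a pooled connection out, wraps the work in a
    # transaction, commits, and returns the connection to the pool.
    with engine.begin() as conn:
        conn.execute(text("INSERT INTO pages (url, body) VALUES (:u, :b)"),
                     {"u": url, "b": body})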
You need to call session.close() after every transaction to the database, so that a cursor is only ever reused within the same thread; sharing the same cursor across multiple threads is what causes this error.
Use threading.Lock()
I could not find any benchmarks in any of the above answers, so I wrote a test to benchmark everything.
I tried three approaches:
Reading and writing sequentially from the SQLite database
Using a ThreadPoolExecutor to read/write
Using a ProcessPoolExecutor to read/write
The results and takeaways from the benchmark are as follows:
Sequential reads/sequential writes work best.
If you must process in parallel, use the ProcessPoolExecutor to read in parallel.
Do not perform writes with either the ThreadPoolExecutor or the ProcessPoolExecutor; you will run into "database is locked" errors and have to retry inserting the chunk.
You can find the code and the complete solution for the benchmarks in my SO answer HERE. Hope that helps!
Scrapy seems like a potential answer to my question. Its home page describes my exact task. (Though I'm not sure how stable the code is yet.)
I would take a look at the y_serial Python module for data persistence: http://yserial.sourceforge.net
It handles deadlock issues surrounding a single SQLite database. If demand for concurrency gets heavy, you can easily set up the class Farm of many databases to diffuse the load over stochastic time.
Hope this helps your project... it should be simple enough to implement in 10 minutes.
I like Evgeny's answer - Queues are generally the best way to implement inter-thread communication. For completeness, here are some other options:
Close the DB connection when the spawned threads have finished using it. This would fix your OperationalError, but opening and closing connections like this is generally a no-no, due to the performance overhead.
Don't use child threads. If the once-per-second task is reasonably lightweight, you could get away with doing the fetch and store, then sleeping until the right moment. This is undesirable because the fetch-and-store could take more than a second, and you lose the benefit of the multiplexed resources you have with a multithreaded approach.
You need to design the concurrency for your program. SQLite has clear limitations and you need to obey them, see the FAQ (also the following question).
Please consider checking the value of THREADSAFE for the pragma_compile_options of your SQLite installation. For instance, with
SELECT * FROM pragma_compile_options;
If THREADSAFE is equal to 1, then your SQLite installation is thread-safe, and all you have to do to avoid the threading exception is to create the Python connection with check_same_thread set to False. In your case, that means:
conn = sqlite3.connect("mydatabase.db", check_same_thread=False)
That's explained in some detail in Python, SQLite, and thread safety
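For example, you can inspect the compile-time options from Python itself (the pragma_compile_options table-valued pragma needs SQLite 3.16 or newer):

import sqlite3

conn = sqlite3.connect(":memory:")
for (opt,) in conn.execute("SELECT * FROM pragma_compile_options"):
    if opt.startswith("THREADSAFE"):
        print(opt)   # e.g. THREADSAFE=1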
The most likely reason you get errors with locked databases is that you must issue
conn.commit()
after finishing a database operation. If you do not, your database will be write-locked and stay that way. The other threads that are waiting to write will time out after a while (the default is 5 seconds; see http://docs.python.org/2/library/sqlite3.html#sqlite3.connect for details).
An example of a correct and concurrent insertion would be this:
import threading, sqlite3

class InsertionThread(threading.Thread):

    def __init__(self, number):
        super(InsertionThread, self).__init__()
        self.number = number

    def run(self):
        conn = sqlite3.connect('yourdb.db', timeout=5)
        conn.execute('CREATE TABLE IF NOT EXISTS threadcount (threadnum, count);')
        conn.commit()
        for i in range(1000):
            conn.execute("INSERT INTO threadcount VALUES (?, ?);", (self.number, i))
            conn.commit()

# create as many of these as you wish
# but be careful to set the timeout value appropriately: thread switching in
# python takes some time
for i in range(2):
    t = InsertionThread(i)
    t.start()
If you like SQLite, or have other tools that work with SQLite databases, or want to replace CSV files with SQLite db files, or must do something rare like inter-platform IPC, then SQLite is a great tool and very fitting for the purpose. Don't let yourself be pressured into using a different solution if it doesn't feel right!
