Python MySQL queue: run code/queries in parallel, but commit separately - python

Program 1 inserts some jobs into a table job_table.
Program 2 needs to:
1. get the job from the table
2. handle the job
-> this needs to be multi-threaded (because each job involves urllib waiting time, which should run in parallel)
3. insert the results into my_other_table, committing the result
Any good (standard?) ways to implement this? The issue is that committing inside one thread also commits the work of the other threads.

I was able to pick records from the MySQL table and put them into a queue, then get them from the queue, but I was not able to insert them into a new MySQL table.
In this version I only pick up new records as they arrive in the table.
Hope this helps; if there are any mistakes, please point them out.
from threading import Thread
import time
import Queue
import csv
import random
import pandas as pd
import pymysql.cursors
from sqlalchemy import create_engine
import logging

queue = Queue.Queue(1000)
logging.basicConfig(level=logging.DEBUG, format='(%(threadName)-9s) %(message)s', )
conn = pymysql.connect(conn-details)  # placeholder for your connection parameters
cursor = conn.cursor()

class ProducerThread(Thread):
    def run(self):
        global queue
        cursor.execute("SELECT ID FROM multi ORDER BY ID LIMIT 1")
        min_id = cursor.fetchall()
        min_id1 = list(min_id[0])
        while True:
            cursor.execute("SELECT ID FROM multi ORDER BY ID desc LIMIT 1")
            max_id = cursor.fetchall()
            max_id1 = list(max_id[0])
            sql = "select * from multi where ID between '{}' and '{}'".format(min_id1[0], max_id1[0])
            cursor.execute(sql)
            data = cursor.fetchall()
            min_id1[0] = max_id1[0] + 1
            for row in data:
                num = row
                queue.put(num)  # acquire(); wait()
                logging.debug('Putting ' + str(num) + ' : ' + str(queue.qsize()) + ' items in queue')

class ConsumerThread(Thread):
    def run(self):
        global queue
        while True:
            num = queue.get()
            print num
            logging.debug('Getting ' + str(num) + ' : ' + str(queue.qsize()) + ' items in queue')
            sql1 = "insert into multi_out(ID, clientname) values (%s, %s)"
            print sql1
            cursor.execute(sql1, (num[0], num[1]))
            # conn.commit()
            # conn.close()

def main():
    ProducerThread().start()
    num_of_consumers = 20
    for i in range(num_of_consumers):
        ConsumerThread().start()

main()

What probably happens is that you are sharing one MySQL connection between all the threads. Try creating a new MySQL connection inside each thread.
For program 2, look at http://www.celeryproject.org/ :)
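For example, a minimal sketch (assuming a shared job queue and placeholder connection parameters) where each consumer thread opens its own pymysql connection, so a commit in one thread cannot commit work done in another:

import queue
import threading
import pymysql

job_queue = queue.Queue()

def consumer():
    # Each thread gets its own connection: commit() here only affects
    # this thread's transaction, not the other workers'.
    conn = pymysql.connect(host='localhost', user='user',
                           password='secret', db='mydb')  # placeholders
    cur = conn.cursor()
    while True:
        num = job_queue.get()
        cur.execute("INSERT INTO multi_out (ID, clientname) VALUES (%s, %s)",
                    (num[0], num[1]))
        conn.commit()
        job_queue.task_done()

for _ in range(20):
    threading.Thread(target=consumer, daemon=True).start()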

This is a common task when doing some sort of web crawling. I have implemented a single thread which grabs a job, waits for the HTTP response, then writes the response to a database table.
The problem I have come across with my method is that you need to lock the table you are grabbing jobs from, and mark the jobs as in progress or complete, so that multiple threads don't try to grab the same task.
I just used threading.Thread in Python and overrode the run method.
Use one database connection per thread (some DB libraries in Python are not thread-safe).

If you have X number of threads running and periodically reading from the jobs table, then MySQL will do the concurrency for you.
Or if you need even more assurance, you can always lock the jobs table yourself before reading the next available entry. This way you can be 100% sure that a single job will only be processed once.
As @Martin said, keep connections separate for all threads. They can use the same credentials.
So in short:
Program one -> Insert into jobs
Program two -> Create a write lock on the jobs table, so no one else can read from it
Program two -> Read the next available job
Program two -> Unlock the table
Do everything else as usual; MySQL will handle concurrency (a minimal sketch of this claim-a-job step follows).
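A minimal sketch of that claim-a-job step with pymysql (the job_table name comes from the question; the status and payload columns are assumptions about your schema):

def claim_next_job(conn):
    """Lock job_table, grab the next unclaimed job, mark it in progress, unlock."""
    cur = conn.cursor()
    cur.execute("LOCK TABLES job_table WRITE")
    try:
        cur.execute("SELECT id, payload FROM job_table "
                    "WHERE status = 'new' ORDER BY id LIMIT 1")
        job = cur.fetchone()
        if job is not None:
            cur.execute("UPDATE job_table SET status = 'in_progress' WHERE id = %s",
                        (job[0],))
            conn.commit()
    finally:
        cur.execute("UNLOCK TABLES")
    return job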

Related

peewee savepoint does not exist

I'm using peewee to interface with a MySQL database. I have a list of entries which must be inserted into the database, or updated in case they're already present there. I'm using the create_or_get function for this. I also use threading to speed up the process; the code looks like this:
# pool is just a map wrapper around standard threading module
pool = utils.ThreadPool()
for page in xrange(0, pages):
    pool.add_task(self.update_page, page)
pool.wait_completion()

def update_page(self, num):
    for entry in self.get_entries_from_page(num):
        self.push_entry(entry)

def push_entry(self, entry):
    with _db.execution_context():
        result, new = EntryModel.create_or_get(**entry)  # << error here
        if not new:
            if entry['date'] > result.date:
                result.hits += 1
                result.date = track['date']
                result.save()
Database initialization:
_db.initialize(playhouse.pool.PooledMySQLDatabase(n, user = u, passwd = w, host = h, port = p))
Everything was running smoothly, but all of a sudden I began to receive a lot of errors on the mentioned line:
(1305, 'SAVEPOINT s449cd5a8d165440aaf47b205e2932362 does not exist')
The savepoint number changes every time and the data is not being written to the database. Recreating the database did not help. What can lead to this error?
Try removing autocommit=True when creating the database connection.
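For example, a minimal sketch of the suggested change, assuming the connection was being created with autocommit=True (parameter names and placeholders n, u, w, h, p are taken from the question):

import playhouse.pool

# Before (assumed): autocommit=True passed to the pooled database
# _db.initialize(playhouse.pool.PooledMySQLDatabase(
#     n, user=u, passwd=w, host=h, port=p, autocommit=True))

# After: let peewee's execution_context / transactions manage commits
_db.initialize(playhouse.pool.PooledMySQLDatabase(
    n, user=u, passwd=w, host=h, port=p))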

Python- performance issue with mysql and google play subscription

I am trying to extract the purchase_state from Google Play, following the steps below:
import base64
import requests
import smtplib
from collections import OrderedDict
import mysql.connector
from mysql.connector import errorcode
......
Query the db, returning thousands of rows with a purchase_id field from my table.
For each row from the db, extract purchase_id, then query Google Play for it. For example, if my previous step returns 1000 rows, Google is queried 1000 times (refresh token + query).
Add a new purchase-status field from Google Play to a new dictionary, along with some other fields grabbed from the MySQL query.
The last step is a loop over my dict, as follows, to prepare the desired report.
AFTER EDIT:
def build_dic_from_db(data, access_token):
    dic = {}
    for row in data:
        product_id = row['product_id']
        purchase_id = row['purchase_id']
        status = check_purchase_status(access_token, product_id, purchase_id)
        cnt = 1
        if row['user'] not in dic:
            dic[row['user']] = {'id': row['user_id'], 'country': row['country_name'],
                                'reg_ts': row['user_registration_timestamp'],
                                'last_active_ts': row['user_last_active_action_timestamp'],
                                'total_credits': row['user_credits'],
                                'total_call_sec_this_month': row['outgoing_call_seconds_this_month'],
                                'user_status': row['user_status'],
                                'mobile': row['user_mobile_phone_number_num'],
                                'plus': row['user_assigned_msisdn_num'],
                                row['product_id']: {'tAttemp': cnt, 'tCancel': status}}
        else:
            if row['product_id'] not in dic[row['user']]:
                dic[row['user']][row['product_id']] = {'tAttemp': cnt, 'tCancel': status}
            else:
                dic[row['user']][row['product_id']]['tCancel'] += status
                dic[row['user']][row['product_id']]['tAttemp'] += cnt
    return dic
The problem is that my code runs slowly (total execution time: 448.7483880519867 seconds) and I am wondering if there is a way to improve my script. Any suggestions?
I hope I'm right about this, but the bottleneck seems to be the connection to the Play Store. Doing it sequentially will take a long time, whereas the server can handle many requests concurrently. So here's a way to process your jobs with executors (concurrent.futures is in the standard library on Python 3; on Python 2 you need the "futures" backport installed).
With this example, you'll be able to send 100 requests at the same time.
from concurrent import futures

EXECUTORS = futures.ThreadPoolExecutor(max_workers=100)

jobs = dict()
for row in data:
    product_id = row['product_id']
    purchase_id = row['purchase_id']
    job = EXECUTORS.submit(check_purchase_status,
                           access_token, product_id, purchase_id)
    jobs[job] = row

for job in futures.as_completed(jobs.keys()):
    # here collect your results and do something useful with them :)
    status = job.result()
    # make the connection with the current row
    row = jobs[job]
    # now you have your status and the row
And by the way, try to use temporary variables; otherwise you're constantly accessing your dictionary with the same keys, which is not good for performance or for the readability of your code. See the sketch below.
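For instance, a minimal sketch of that refactor applied to the inner loop of build_dic_from_db from the question (the user-metadata fields are omitted here for brevity):

for row in data:
    user = row['user']
    product_id = row['product_id']
    status = check_purchase_status(access_token, product_id, row['purchase_id'])

    user_entry = dic.setdefault(user, {})      # look the user entry up once
    product_entry = user_entry.get(product_id)
    if product_entry is None:
        user_entry[product_id] = {'tAttemp': 1, 'tCancel': status}
    else:
        product_entry['tAttemp'] += 1
        product_entry['tCancel'] += status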

Python threading and SQL

My question basically is: is there a best-practice approach to DB interaction, and am I doing something silly/wrong below that is costing processing time?
My program pulls data from a website and writes to a SQL database. Speed is very important and I want to be able to refresh the data as quickly as possible. I've tried a number of ways and I feel it's still way too slow, i.e. it could be much better with a better approach/design to the interaction with the db, and I'm sure I'm making all sorts of mistakes. I can download the data to memory very quickly, but the writes to the db take much, much longer.
The 3 main approaches I've tried are:
1. Threads that pull the data and populate a list of SQL commands; when the threads complete, run the SQL in the main thread
2. Threads that pull data and push to SQL (as per the code below)
3. Threads that pull data and populate a queue, with separate thread(s) polling the queue and pushing to the db
Code as below:
import re
import MySQLdb as mydb

class DatabaseUtility():
    def __init__(self):
        """set db parameters"""

    def updateCommand(self, cmd):
        """run SQL commands and return number of matched rows"""
        try:
            self.cur.execute(cmd)
            return int(re.search(r'Rows matched: (\d+)', self.cur._info).group(1))
        except Exception, e:
            print ('runCmd error: ' + str(e))
            print ('With SQL: ' + cmd)
            return 0

    def addCommand(self, cmd):
        """write SQL command to db"""
        try:
            self.cur.execute(cmd)
            return self.cur.rowcount
        except Exception, e:
            print ('runCmd error: ' + str(e))
            print ('With SQL: ' + cmd)
            return 0
I've created a class that instantiates a db connection and is called as below:
from Queue import Queue
from threading import Thread
import urllib2
import json
from databasemanager import DatabaseUtility as dbU
from datalinks import getDataLink, allDataLinks

numThreads = 3
q = Queue()
dbu = dbU()

class OddScrape():
    def __init__(self, name, q):
        self.name = name
        self.getOddsData(self.name, q)

    def getOddsData(self, i, q):
        """Worker thread - parse each datalink and update / insert to db"""
        while True:
            # get datalink, create db connection
            self.dbu = dbU()
            matchData = q.get()
            # load data link using urllib2 and do a bunch of stuff
            # to parse the data to the required format
            # try to update in db and insert if not found
            sql = "sql to update %s" % (params)
            update = self.dbu.updateCommand(sql)
            if update < 1:
                sql = "sql to insert" % (params)
                self.dbu.addCommand(sql)
            q.task_done()
            self.dbu.dbConClose()
            print eventlink

def threadQ():
    # set up some threads
    for i in range(numThreads):
        worker = Thread(target=OddScrape, args=(i, q,))
        worker.start()
    # get urldata for all matches required and add to q
    matchids = dbu.runCommand("sql code to determine scope of urls")
    for match in matchids:
        sql = "sql code to get url data %s" % match
        q.put(dbu.runCommand(sql))
    q.join()
I've also added an index to the table I'm writing to, which seemed to help a tiny bit but not noticeably:
CREATE INDEX `idx_oddsdata_bookid_datalinkid`
ON `dbname`.`oddsdata` (bookid, datalinkid) COMMENT '' ALGORITHM DEFAULT LOCK DEFAULT;
Multiple threads imply multiple connections. Although getting a connection is "fast" in MySQL, it is not instantaneous. I do not know the relative speed of getting a connection versus running a query, but I doubt your multi-threaded idea will win.
Could you show us examples of the actual queries (SQL, not Python code) you need to run? We may have suggestions on combining queries, improved indexes, etc. Please provide SHOW CREATE TABLE, too. (You mentioned a CREATE INDEX, but it is useless out of context.)
It looks like you are doing a multi-step process that could be collapsed into INSERT ... ON DUPLICATE KEY UPDATE ... (see the sketch below).
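A minimal sketch of that collapse, assuming (bookid, datalinkid) is a UNIQUE or PRIMARY key on oddsdata and that odds is the column being refreshed (both are assumptions, since the real schema isn't shown):

# One round trip replaces the update-then-insert-if-not-found pattern.
sql = ("INSERT INTO oddsdata (bookid, datalinkid, odds) "
       "VALUES (%s, %s, %s) "
       "ON DUPLICATE KEY UPDATE odds = VALUES(odds)")
cur.execute(sql, (bookid, datalinkid, odds))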

Python code not executing in order? MySQLdb UPDATE commits in unexpected order

I've got a Python 2.7 script I'm working on that retrieves rows from a MySQL table, loops through the data to process it, then is supposed to do the following things in this order:
1. UPDATE the table rows we just retrieved to set a locked value in each row to TRUE
2. After the UPDATE query executes and commits via MySQLdb, a ThreadPool of processes should run on the data from the original loop.
What's actually happening is that the UPDATE query seems to be committing somehow after the ThreadPool is done. I tried refactoring it into a try/finally statement to make sure, but now it either still commits afterwards, or just doesn't commit the UPDATE and runs the ThreadPool anyway.
It's a head-scratcher, to be sure. I assume I'm just doing something really wrong and obvious but not catching it after looking at this for so long. Any input is greatly appreciated!
Here's the gist:
from multiprocessing.pool import ThreadPool, IMapIterator
import MySQLdb as mdb
import os, sys, time
import re
from boto.s3.connection import S3Connection
from boto.s3.bucket import Bucket
...

con = mdb.connect('localhost', 'user', 'pass', 'db')
with con:
    cur = con.cursor()
    cur.execute("SELECT preview_queue.filename, preview_queue.product_id, preview_queue.track, products.name, preview_queue.id FROM preview_queue join `catalog_module-products` AS products on products.id = preview_queue.product_id where locked != 1")
    rows = cur.fetchall()
    mp3s_to_download = []
    lock_ids = []
    last_directory = ""
    if len(rows) > 0:
        for row in rows:
            base_dir = str(get_base_dir(row[1], row[3]))
            mp3s_to_download.append([base_dir, str(row[0])])
            if last_directory != "preview_temp/"+base_dir:
                if not os.path.exists("preview_temp/"+base_dir):
                    try:
                        os.makedirs("preview_temp/"+base_dir)
                    except OSError, e:
                        pass
                last_directory = "preview_temp/"+base_dir
            lock_ids.append(str(row[4]))
        if len(lock_ids) > 0:
            action_ids = ','.join(lock_ids)
            try:
                cur.execute("UPDATE preview_queue SET locked = 1 WHERE id IN ({})".format(action_ids))
                con.commit()
            finally:
                pool = ThreadPool(processes=20)
                pool.map(download_file, mp3s_to_download)
    cur.close()
A finally clause is guaranteed to execute, even if the try clause raises an exception. What is probably happening here is that an exception is being raised, preventing the update from being committed, but the threads are triggered anyway.
This doesn't really seem to be an appropriate use of try/finally. Rather, use a normal try/except to catch and log any exceptions, and then perhaps use an else clause to start the threads only if no exception was raised (see the sketch below).
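A minimal sketch of that structure, reusing the names from the question's code:

try:
    cur.execute("UPDATE preview_queue SET locked = 1 WHERE id IN ({})".format(action_ids))
    con.commit()
except mdb.Error as e:
    # The UPDATE or commit failed; log it and do not start the downloads.
    print('Failed to lock rows: {}'.format(e))
else:
    # Runs only if no exception was raised, i.e. the lock really committed.
    pool = ThreadPool(processes=20)
    pool.map(download_file, mp3s_to_download)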

Python, SQLite and threading

I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP.
So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database.
Then in the main thread start a CherryPy application that will query that SQLite database and serve the data.
My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application?
If I do a connection per thread to the database, will I also be able to create/use an in-memory database?
Short answer: Don't use Sqlite3 in a threaded application.
Sqlite3 databases scale well for size, but rather terribly for concurrency. You will be plagued with "Database is locked" errors.
If you do, you will need a connection per thread, and you have to ensure that these connections clean up after themselves. This is traditionally handled using thread-local sessions, and is performed rather well (for example) using SQLAlchemy's ScopedSession. I would use this if I were you, even if you aren't using the SQLAlchemy ORM features.
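For example, a minimal sketch of thread-local sessions with SQLAlchemy's scoped_session (the database path and the worker body are placeholders):

import threading
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite:///cache.db')          # placeholder path
Session = scoped_session(sessionmaker(bind=engine))   # one session per thread

def worker():
    session = Session()                  # this thread's own session/connection
    session.execute(text("SELECT 1"))    # ... gather and cache data here ...
    session.commit()
    Session.remove()                     # clean up the thread-local session

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()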
This test was done to determine the best way to write to and read from a SQLite database. We follow three approaches below:
Read and write without any threads (the methods with the word "normal" in them)
Read and write with threads
Read and write with processes
Our sample dataset is a dummy generated OHLC dataset with a symbol, a timestamp, and 6 fake values for ohlc and volumefrom, volumeto.
Reads
Normal method takes about 0.25 seconds to read
Threaded method takes 10 seconds
Processing takes 0.25 seconds to read
Winner: Processing and Normal
Writes
Normal method takes about 1.5 seconds to write
Threaded method takes about 30 seconds
Processing takes about 30 seconds
Winner: Normal
Note: not all records are written by the threaded and multiprocessed write methods. These methods obviously run into database-locked errors as the writes queue up: SQLite only queues writes up to a certain threshold and then throws sqlite3.OperationalError indicating the database is locked.
The ideal approach would be to retry inserting the same chunk, but there is no point, since parallel insertion takes more time than a sequential write even without retrying the locked/failed inserts. Without retrying, 97% of the rows were written and it still took 10x longer than a sequential write.
Strategies to take away:
Prefer reading from and writing to SQLite in the same thread
If you must parallelise, use multiprocessing for reads, which has more or less the same performance as sequential reads, and defer to single-threaded write operations
DO NOT USE THREADING for reads and writes, as it is 10x slower on both; you can thank the GIL for that
Here is the code for the complete test:
import sqlite3
import time
import random
import string
import os
import timeit
from functools import wraps
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import threading

database_file = os.path.realpath('../files/ohlc.db')

create_statement = 'CREATE TABLE IF NOT EXISTS database_threading_test (symbol TEXT, ts INTEGER, o REAL, h REAL, l REAL, c REAL, vf REAL, vt REAL, PRIMARY KEY(symbol, ts))'
insert_statement = 'INSERT INTO database_threading_test VALUES(?,?,?,?,?,?,?,?)'
select = 'SELECT * from database_threading_test'

def time_stuff(some_function):
    def wrapper(*args, **kwargs):
        t0 = timeit.default_timer()
        value = some_function(*args, **kwargs)
        print(timeit.default_timer() - t0, 'seconds')
        return value
    return wrapper

def generate_values(count=100):
    end = int(time.time()) - int(time.time()) % 900
    symbol = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
    ts = list(range(end - count * 900, end, 900))
    for i in range(count):
        yield (symbol, ts[i], random.random() * 1000, random.random() * 1000, random.random() * 1000, random.random() * 1000, random.random() * 1e9, random.random() * 1e5)

def generate_values_list(symbols=1000, count=100):
    values = []
    for _ in range(symbols):
        values.extend(generate_values(count))
    return values

@time_stuff
def sqlite_normal_read():
    """
    100k records in the database, 1000 symbols, 100 rows
    First run
    0.25139795300037804 seconds
    Second run
    Third run
    """
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    try:
        with conn:
            conn.execute(create_statement)
            results = conn.execute(select).fetchall()
            print(len(results))
    except sqlite3.OperationalError as e:
        print(e)

@time_stuff
def sqlite_normal_write():
    """
    1000 symbols, 100 rows
    First run
    2.279409104000024 seconds
    Second run
    2.3364172020001206 seconds
    Third run
    """
    l = generate_values_list()
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    try:
        with conn:
            conn.execute(create_statement)
            conn.executemany(insert_statement, l)
    except sqlite3.OperationalError as e:
        print(e)

@time_stuff
def sequential_batch_read():
    """
    We read all the rows for each symbol one after the other in sequence
    First run
    3.661222331999852 seconds
    Second run
    2.2836898810001003 seconds
    Third run
    0.24514851899994028 seconds
    Fourth run
    0.24082150699996419 seconds
    """
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    try:
        with conn:
            conn.execute(create_statement)
            symbols = conn.execute("SELECT DISTINCT symbol FROM database_threading_test").fetchall()
            for symbol in symbols:
                results = conn.execute("SELECT * FROM database_threading_test WHERE symbol=?", symbol).fetchall()
    except sqlite3.OperationalError as e:
        print(e)

def sqlite_threaded_read_task(symbol):
    results = []
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    try:
        with conn:
            results = conn.execute("SELECT * FROM database_threading_test WHERE symbol=?", symbol).fetchall()
    except sqlite3.OperationalError as e:
        print(e)
    finally:
        return results

def sqlite_multiprocessed_read_task(symbol):
    results = []
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    try:
        with conn:
            results = conn.execute("SELECT * FROM database_threading_test WHERE symbol=?", symbol).fetchall()
    except sqlite3.OperationalError as e:
        print(e)
    finally:
        return results

@time_stuff
def sqlite_threaded_read():
    """
    1000 symbols, 100 rows per symbol
    First run
    9.429676861000189 seconds
    Second run
    10.18928106400017 seconds
    Third run
    10.382290903000467 seconds
    """
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    symbols = conn.execute("SELECT DISTINCT SYMBOL from database_threading_test").fetchall()
    with ThreadPoolExecutor(max_workers=8) as e:
        results = e.map(sqlite_threaded_read_task, symbols, chunksize=50)
        for result in results:
            pass

@time_stuff
def sqlite_multiprocessed_read():
    """
    1000 symbols, 100 rows
    First run
    0.2484774920012569 seconds!!!
    Second run
    0.24322178500005975 seconds
    Third run
    0.2863524549993599 seconds
    """
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    symbols = conn.execute("SELECT DISTINCT SYMBOL from database_threading_test").fetchall()
    with ProcessPoolExecutor(max_workers=8) as e:
        results = e.map(sqlite_multiprocessed_read_task, symbols, chunksize=50)
        for result in results:
            pass

def sqlite_threaded_write_task(n):
    """
    We ignore the database locked errors here. The ideal case would be to retry, but there is no point writing code for that if it takes longer than a sequential write even without database locked errors.
    """
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    data = list(generate_values())
    try:
        with conn:
            conn.executemany("INSERT INTO database_threading_test VALUES(?,?,?,?,?,?,?,?)", data)
    except sqlite3.OperationalError as e:
        print("Database locked", e)
    finally:
        conn.close()
    return len(data)

def sqlite_multiprocessed_write_task(n):
    """
    We ignore the database locked errors here. The ideal case would be to retry, but there is no point writing code for that if it takes longer than a sequential write even without database locked errors.
    """
    conn = sqlite3.connect(os.path.realpath('../files/ohlc.db'))
    data = list(generate_values())
    try:
        with conn:
            conn.executemany("INSERT INTO database_threading_test VALUES(?,?,?,?,?,?,?,?)", data)
    except sqlite3.OperationalError as e:
        print("Database locked", e)
    finally:
        conn.close()
    return len(data)

@time_stuff
def sqlite_threaded_write():
    """
    Did not write all the results, but the outcome with 97400 rows written is still this...
    Takes 20x the amount of time as a normal write
    1000 symbols, 100 rows
    First run
    28.17819765000013 seconds
    Second run
    25.557972323000058 seconds
    Third run
    """
    symbols = [i for i in range(1000)]
    with ThreadPoolExecutor(max_workers=8) as e:
        results = e.map(sqlite_threaded_write_task, symbols, chunksize=50)
        for result in results:
            pass

@time_stuff
def sqlite_multiprocessed_write():
    """
    1000 symbols, 100 rows
    First run
    30.09209805699993 seconds
    Second run
    27.502465319000066 seconds
    Third run
    """
    symbols = [i for i in range(1000)]
    with ProcessPoolExecutor(max_workers=8) as e:
        results = e.map(sqlite_multiprocessed_write_task, symbols, chunksize=50)
        for result in results:
            pass

sqlite_normal_write()
You can use something like the code above.
"...create several threads that will gather data at a specified interval and cache that data locally into a sqlite database.
Then in the main thread start a CherryPy app that will query that sqlite db and serve the data."
Don't waste a lot of time on threads. The things you're describing are simply OS processes. Just start ordinary processes to do the gathering and run CherryPy.
You have no real use for concurrent threads in a single process for this. Gathering data at a specified interval -- when done with simple OS processes -- can be scheduled by the OS very simply. Cron, for example, does a great job of this.
A CherryPy App, also, is an OS process, not a single thread of some larger process.
Just use processes -- threads won't help you.
Depending on the application the DB could be a real overhead. If we are talking about volatile data, maybe you could skip the communication via DB completely and share the data between the data gathering process and the data serving process(es) via IPC. This is not an option if the data has to be persisted, of course.
Depending on the data rate, SQLite could be exactly the correct way to do this. The entire database is locked for each write, so you aren't going to scale to 1000s of simultaneous writes per second. But if you only have a few, it is the safest way of ensuring you don't overwrite each other's data.
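If you do end up with a handful of writer threads, one small mitigation (a sketch, not something from the answers above) is to give each thread its own connection with a generous busy timeout, so a writer waits for the lock instead of immediately raising "database is locked":

import sqlite3

def get_connection(path='cache.db'):   # hypothetical path
    # One connection per thread; timeout is how long (in seconds) sqlite3
    # waits for the write lock before raising sqlite3.OperationalError.
    return sqlite3.connect(path, timeout=30)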
