Speed up Python w/ SQLAlchemy function

I have a function that populates a database table using Python and SQLAlchemy. The function is running fairly slowly right now, taking around 17 minutes. I think the main problem is that I am looping through two large sets of data to build the new table. I have included the record counts in the code below.
How can I speed this up? Should I try to convert the nested for loop into one big SQLAlchemy query? I profiled this function with PyCharm, but I'm not sure I fully understand the results.
def populate(self):
    """Core function to populate positions."""
    # get raw annotations with tag Org
    # returns 11,659 records
    organizations = model.session.query(model.Annotation) \
        .filter(model.Annotation.tag == 'Org') \
        .filter(model.Annotation.organization_id.isnot(None)).all()
    # get raw annotations with tags Support or Oppose
    # returns 2,947 records
    annotations = model.session.query(model.Annotation) \
        .filter((model.Annotation.tag == 'Support') | (model.Annotation.tag == 'Oppose')).all()
    for org in organizations:
        for anno in annotations:
            # Org overlaps with Support or Oppose tag
            # start and end columns are integers
            if org.start >= anno.start and org.end <= anno.end:
                position = model.Position()
                # set to de-duplicated organization
                position.organization_id = org.organization_id
                position.disposition = anno.tag
                # look up bill_id from document_bill table
                document = model.session.query(model.document_bill) \
                    .filter_by(document_id=anno.document_id).first()
                position.bill_id = document.bill_id
                position.document_id = anno.document_id
                model.session.add(position)
                logging.info('org: {}, disposition: {}, bill: {}'.format(
                    position.organization_id, position.disposition, position.bill_id))
                continue
    logging.info('committing to database')
    model.session.commit()

My bets, in order of descending probability:
Autocommit is ON, so you're waiting for disk.
The query inside the loop ("document = model.session.query(model.document_bill)...") is slow (use EXPLAIN ANALYZE).
Most of the time is actually spent printing logs to the terminal in the inner loop (you should profile).
model.session.add(position) is slow (no idea what that does).
(And this one should really be first.) Could a SQL query like INSERT INTO ... SELECT do this in a couple of tens of milliseconds? If so, why build the rows in a loop in the application?
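A minimal sketch of that last idea, assuming SQLAlchemy 1.4+ and guessing at the schema behind the models in the question (the column list and the treatment of document_bill as a Core table are assumptions): a single INSERT ... FROM SELECT built with insert().from_select() replaces both the nested loop and the per-row document_bill lookup.
from sqlalchemy import insert, select
from sqlalchemy.orm import aliased

Org = aliased(model.Annotation)
Anno = aliased(model.Annotation)

stmt = insert(model.Position).from_select(
    ['organization_id', 'disposition', 'bill_id', 'document_id'],
    select(
        Org.organization_id,
        Anno.tag,
        model.document_bill.c.bill_id,   # drop the .c if document_bill is a mapped class
        Anno.document_id,
    )
    .where(Org.tag == 'Org', Org.organization_id.isnot(None))
    .where(Anno.tag.in_(['Support', 'Oppose']))
    .where(Org.start >= Anno.start, Org.end <= Anno.end)
    .where(model.document_bill.c.document_id == Anno.document_id)
)
model.session.execute(stmt)
model.session.commit()
Even if you keep the Python loop, hoisting the document_bill lookup out of the inner loop (for example into a dict keyed by document_id, built with one query) and committing once at the end removes most of the per-row round trips.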

Related

MySQL SQLAlchemy Python: Getting Max Count for Timestamp

I have data recorded for several timestamps ... I want to get the maximum count across all timestamps.
This is my code:
for timestamp in timestamps:
    count = db.query(models.Appointment.id).filter(models.Appointment.place == place) \
        .filter(models.Appointment.date == date) \
        .filter(models.Appointment.timestamp == timestamp).count()
    data.append(count)
return max(data)
Sadly, it takes about 1.5 seconds per timestamp to calculate the requested value.
Is there a query that can handle this in around 3-10 seconds?
Regards,
Martin
If using MySQL 8 or later, you could give the following a go:
return db.query(func.max(func.count()).over()).\
    filter(models.Appointment.place == place).\
    filter(models.Appointment.date == date).\
    filter(models.Appointment.timestamp.in_(timestamps)).\
    group_by(models.Appointment.timestamp).\
    limit(1).\
    scalar()
This uses the (slightly non-obvious) fact that window functions are evaluated after the group rows are formed, and that without a PARTITION BY and ORDER BY the window spans all the group rows.
If using a version of MySQL that does not yet support window functions, use a subquery instead:
counts = db.query(func.count().label('count')).\
    filter(models.Appointment.place == place).\
    filter(models.Appointment.date == date).\
    filter(models.Appointment.timestamp.in_(timestamps)).\
    group_by(models.Appointment.timestamp).\
    subquery()
return db.query(func.max(counts.c.count)).scalar()
The difference between these and the original approach is that both make only a single trip to the database. That is usually desirable, but may require thinking a bit differently about the problem, due to SQL being a (more or less) declarative language – you mostly describe the answer you want, not how to compute it✝.
✝ "I want coffee" vs. "Start by pouring some water in the..."
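If window functions are not available but you still want to avoid the aggregate-of-aggregate, another single-trip sketch (same assumed models and db session as above; this is an alternative to the queries shown, not part of them) is to let MySQL order the grouped counts and keep only the largest:
from sqlalchemy import func

return db.query(func.count()).\
    filter(models.Appointment.place == place).\
    filter(models.Appointment.date == date).\
    filter(models.Appointment.timestamp.in_(timestamps)).\
    group_by(models.Appointment.timestamp).\
    order_by(func.count().desc()).\
    limit(1).\
    scalar()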

Simple script not printing

I'm trying to sort a collection, and then print the first 5 docs to make sure it has worked:
#!/user/bin/env python
import pymongo

# Establish a connection to the mongo database.
connection = pymongo.MongoClient('mongodb://localhost')
# Get a handle to the students database.
db = connection.school
students = db.students

def order_homework():
    projection = {'scores': {'$elemMatch': {'type': 'homework'}}}
    cursor = students.find({}, projection)
    # Sort each item's scores.
    for each in cursor:
        each['scores'].sort()
    # Sort by _id.
    cursor = sorted(cursor, key=lambda x: x['_id'])
    # Print the first five items.
    count = 0
    for each in cursor:
        print(each)
        count += 1
        if count == 5:
            break

if __name__ == '__main__':
    order_homework()
When I run this, nothing prints.
If I take out the sorts, then it prints.
Each sort works when run individually.
Please teach me what I'm doing wrong / educate me.
You're trying to treat the cursor like a list that you can iterate several times from the start. PyMongo cursors don't work that way: once you've iterated it in the first for each in cursor loop, the cursor is exhausted and you can't iterate it again.
You can turn the cursor into a list like:
data = list(students.find({}, projection))
For efficiency, get results pre-sorted from MongoDB:
list(students.find({}, projection).sort('_id'))
This sends the sort criterion to the server, which then streams the results back to you pre-sorted, instead of requiring you to do the sort client-side. You can then delete the "Sort by _id" lines from your function.
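Putting both fixes together, a sketch of the rewritten function (same collection and projection as the question; the key used to sort each scores list is an assumption about the documents' shape):
def order_homework():
    projection = {'scores': {'$elemMatch': {'type': 'homework'}}}
    # Let MongoDB sort by _id, then materialize the cursor exactly once.
    docs = list(students.find({}, projection).sort('_id'))
    # Sort each document's scores client-side (assumes each score has a 'score' field).
    for doc in docs:
        doc['scores'].sort(key=lambda s: s['score'])
    # Print the first five documents.
    for doc in docs[:5]:
        print(doc)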

Ordering objects by rating, accounting for the number of ratings

I'm trying to do something similar to the first response in this SO question: SQL ordering by rating/votes, where resources may be rated (one rating per user per resource), but when ordering the resources based on their ratings, any resources with fewer than X separate ratings will appear below those with X or more.
I'm implementing this in Django and I'd very much prefer to avoid raw SQL and stay within the Django model and query framework.
So far, this is what I have:
data = []
data_top = Resource.objects.all().annotate(rating=Avg('resourcerating__rating'), rate_count=Count('resourcerating')).exclude(rate_count__lt=settings.ORB_RESOURCE_MIN_RATINGS).order_by(order_by)
for d in data_top:
    data.append(d)
data_bottom = Resource.objects.all().annotate(rating=Avg('resourcerating__rating'), rate_count=Count('resourcerating')).exclude(rate_count__gte=settings.ORB_RESOURCE_MIN_RATINGS).order_by(order_by)
for d in data_bottom:
    data.append(d)
This all works and returns the ordering by rating as I need; however, it doesn't feel very efficient, what with running two queries and looping over the results of each.
Is there a better way I can code this, either in a single query, or at least avoiding looping though each query set?
Any help much appreciated.
from itertools import chain
main_query = Resource.objects.all().annotate(rating=Avg('resourcerating__rating'),rate_count=Count('resourcerating'))
data_top_query = main_query.exclude(rate_count__lt=settings.ORB_RESOURCE_MIN_RATINGS).order_by(order_by)
data_bottom_query = main_query.exclude(rate_count__gte=settings.ORB_RESOURCE_MIN_RATINGS).order_by(order_by)
data = list(chain(data_top_query, data_bottom_query))
Using itertools.chain is faster than looping over each list and appending elements one by one.
Also, the querysets will only get evaluated when list is called on them (they don't hit the database until then).
FYI, the above will hit the database twice when evaluated.
You're currently querying twice and iterating twice, but you can cut it down to one of each easily: just query for the items ordered by rating, then iterate like this:
data_top = []
data_bottom = []
data = Resource.objects.all().annotate(rating=Avg('resourcerating__rating'), rate_count=Count('resourcerating')).order_by(order_by)
for d in data:
    if d.rate_count >= settings.ORB_RESOURCE_MIN_RATINGS:
        data_top.append(d)
    else:
        data_bottom.append(d)
data = data_top + data_bottom
This can also be done in the query alone, by creating another aggregate column that holds the value of rate_count < settings.ORB_RESOURCE_MIN_RATINGS (0 for values at or above the threshold, 1 for below) and sorting on (new_column, rating). Pretty sure this would require some custom SQL, but perhaps someone else knows otherwise.
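If you are on Django 1.8 or newer, that extra column can be expressed without custom SQL using conditional expressions; a sketch, assuming the same models and settings as above:
from django.db.models import Avg, Case, Count, IntegerField, Value, When

data = (
    Resource.objects
    .annotate(
        rating=Avg('resourcerating__rating'),
        rate_count=Count('resourcerating'),
    )
    .annotate(
        # 1 when the resource has fewer than the minimum number of ratings, else 0
        below_min=Case(
            When(rate_count__lt=settings.ORB_RESOURCE_MIN_RATINGS, then=Value(1)),
            default=Value(0),
            output_field=IntegerField(),
        )
    )
    .order_by('below_min', order_by)
)
This keeps everything in one queryset and one database hit, with the under-threshold resources sorted after the rest.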

How can I make my code more efficient?

I have a list of tuples that contains a tool_id, a time, and a message. I want to select from this list all the elements where the message matches some string, and all the other elements where the time is within some diff of any matching message for that tool.
Here is how I am currently doing this:
# record time for each message matching the specified message for each tool
messageTimes = {}
for row in cdata:  # tool, time, message
    if self.message in row[2]:
        messageTimes[row[0], row[1]] = 1

# now pull out each message that is within the time diff for each matched message
# as well as the matched messages themselves
def determine(tup):
    if self.message in tup[2]: return True  # matched message
    for (tool, date_time) in messageTimes:
        if tool == tup[0]:
            if abs(date_time - tup[1]) <= tdiff:
                return True
    return False

cdata[:] = [tup for tup in cdata if determine(tup)]
This code works, but it takes way too long to run - e.g. when cdata has 600,000 elements (which is typical for my app) it takes 2 hours for this to run.
This data came from a database. Originally I was getting just the data I wanted using SQL, but that was taking too long also. I was selecting just the messages I wanted, then for each one of those doing another query to get the data within the time diff of each. That was resulting in tens of thousands of queries. So I changed it to pull all the potential matches at once and then process it in python, thinking that would be faster. Maybe I was wrong.
Can anyone give me some suggestions on speeding this up?
Updating my post to show what I did in SQL as was suggested.
What I did in SQL was pretty straightforward. The first query was something like:
SELECT tool, date_time, message
FROM event_log
WHERE message LIKE '%foo%'
AND other selection criteria
That was fast enough, but it may return 20 or 30 thousand rows. So then I looped through the result set, and for each row ran a query like this (where dt and t are the date_time and tool from a row from the above select):
SELECT date_time, message
FROM event_log
WHERE tool = t
AND ABS(TIMESTAMPDIFF(SECOND, date_time, dt)) <= timediff
That was taking about an hour.
I also tried doing in one nested query where the inner query selected the rows from my first query, and the outer query selected the time diff rows. That took even longer.
So now I am selecting without the message LIKE '%foo%' clause and I am getting back 600,000 rows and trying to pull out the rows I want from python.
The way to optimize the SQL is to do it all in one query, instead of iterating over 20K rows and doing another query for each one.
Usually this means you need to add a JOIN, or occasionally a sub-query. And yes, you can JOIN a table to itself, as long as you rename one or both copies. So, something like this:
SELECT el2.date_time, el2.message
FROM event_log AS el1
JOIN event_log AS el2
    ON el2.tool = el1.tool
WHERE el1.message LIKE '%foo%'
  AND other selection criteria
  AND ABS(TIMESTAMPDIFF(SECOND, el2.date_time, el1.date_time)) <= timediff
Now, this probably won't be fast enough out of the box, so there are two steps to improve it.
First, look for any columns that obviously need to be indexed. Clearly tool and date_time need simple indices. message may benefit from a simple index or, if your database supports something fancier such as a full-text index, maybe that; but given that the initial query was fast enough, you probably don't need to worry about it.
Occasionally, that's sufficient. But usually, you can't guess everything correctly. And there may also be a need to rearrange the order of the queries, etc. So you're going to want to EXPLAIN the query, and look through the steps the DB engine is taking, and see where it's doing a slow iterative lookup when it could be doing a fast index lookup, or where it's iterating over a large collection before a small collection.
For tabular data, you can't go past the Python pandas library, which contains highly optimised code for queries like this.
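For example, a rough pandas sketch of the same filtering (the four-column row layout matches the one used later in the question; it assumes the time column is datetime-like and tdiff is a pandas Timedelta, so treat it as an illustration rather than a drop-in replacement):
import pandas as pd

df = pd.DataFrame(cdata, columns=['tool', 'time', 'module', 'message'])
matched = df[df['message'].str.contains(self.message, regex=False)]

# merge_asof pairs each row with the nearest matched-message time for the same tool,
# provided it lies within tdiff; both frames must be sorted on the merge key.
paired = pd.merge_asof(
    df.sort_values('time'),
    matched[['tool', 'time']].rename(columns={'time': 'match_time'}).sort_values('match_time'),
    left_on='time',
    right_on='match_time',
    by='tool',
    direction='nearest',
    tolerance=tdiff,
)
result = paired[paired['match_time'].notna()]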
I fixed this by changing my code as follows.
First, I made messageTimes a dict of lists keyed by the tool:
messageTimes = defaultdict(list)  # a dict with sorted lists
for row in cdata:  # tool, time, module, message
    if self.message in row[3]:
        messageTimes[row[0]].append(row[1])
Then, in the determine function, I used bisect:
def determine(tup):
    if self.message in tup[3]: return True  # matched message
    times = messageTimes[tup[0]]
    le = bisect.bisect_right(times, tup[1])
    ge = bisect.bisect_left(times, tup[1])
    return (le and tup[1] - times[le-1] <= tdiff) or (ge != len(times) and times[ge] - tup[1] <= tdiff)
With these changes the code that was taking over 2 hours took under 20 minutes, and even better, a query that was taking 40 minutes took 8 seconds!
I made two more changes, and now that 20-minute query takes 3 minutes:
found = defaultdict(int)

def determine(tup):
    if self.message in tup[3]: return True  # matched message
    times = messageTimes[tup[0]]
    idx = found[tup[0]]
    le = bisect.bisect_right(times, tup[1], idx)
    found[tup[0]] = le  # remember where this tool's search stopped so the next one starts there
    return (le and tup[1] - times[le-1] <= tdiff) or (le != len(times) and times[le] - tup[1] <= tdiff)

App Engine Datastore IN Operator - how to use?

Reading: http://code.google.com/appengine/docs/python/datastore/gqlreference.html
I want to use:
:= IN
but am unsure how to make it work. Let's assume the following
class User(db.Model):
    name = db.StringProperty()

class UniqueListOfSavedItems(db.Model):
    str = db.StringProperty()
    datesaved = db.DateTimeProperty()

class UserListOfSavedItems(db.Model):
    name = db.ReferenceProperty(User, collection_name='user')
    str = db.ReferenceProperty(UniqueListOfSavedItems, collection_name='itemlist')
How can I do a query which gets me the list of saved items for a user? Obviously I can do:
q = db.Gql("SELECT * FROM UserListOfSavedItems WHERE name :=", user[0].name)
but that gets me a list of keys. I want to now take that list and get it into a query to get the str field out of UniqueListOfSavedItems. I thought I could do:
q2 = db.Gql("SELECT * FROM UniqueListOfSavedItems WHERE := str in q")
but something's not right...any ideas? Is it (am at my day job, so can't test this now):
q2 = db.Gql("SELECT * FROM UniqueListOfSavedItems __key__ := str in q)
side note: what a devilishly difficult problem to search on because all I really care about is the "IN" operator.
Since you have a list of keys, you don't need to do a second query - you can do a batch fetch, instead. Try this:
#and this should get me the items that a user saved
useritems = db.get(saveditemkeys)
(Note you don't even need the guard clause - a db.get on 0 entities is short-circuited appropriately.)
What's the difference, you may ask? Well, a db.get takes about 20-40ms. A query, on the other hand (GQL or not) takes about 160-200ms. But wait, it gets worse! The IN operator is implemented in Python, and translates to multiple queries, which are executed serially. So if you do a query with an IN filter for 10 keys, you're doing 10 separate 160ms-ish query operations, for a total of about 1.6 seconds latency. A single db.get, in contrast, will have the same effect and take a total of about 30ms.
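A sketch of the whole batch-fetch flow against the models above (user here stands for the User entity or key whose items you want, and get_value_for_datastore is used to read each referenced key without dereferencing the ReferenceProperty):
# Collect the keys of the saved items referenced by this user's list entries.
usersaveditems = db.GqlQuery(
    "SELECT * FROM UserListOfSavedItems WHERE name = :1", user)
saveditemkeys = [
    UserListOfSavedItems.str.get_value_for_datastore(item)
    for item in usersaveditems
]
# One batch fetch instead of an IN query (which fans out into serial queries).
useritems = db.get(saveditemkeys)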
+1 to Adam for getting me on the right track. Based on his pointer, and doing some searching at Code Search, I have the following solution.
usersaveditems = db.GqlQuery("SELECT * FROM UserListOfSavedItems WHERE name = :1", userkey)
saveditemkeys = []
for item in usersaveditems:
    # this should create a list of keys (references) to the saved item table
    saveditemkeys.append(item.str.key())
if len(saveditemkeys) > 0:
    # and this should get me the items that a user saved
    useritems = db.GqlQuery("SELECT * FROM UniqueListOfSavedItems WHERE __key__ IN :1", saveditemkeys)
