Tracking how many elements processed in generator - python

I have a problem in which I process documents from files using Python generators. The number of files I need to process is not known in advance. Each file contains records which consume a considerable amount of memory. Because of that, generators are used to process the records. Here is a summary of the code I am working on:
def process_all_records(files):
    for f in files:
        fd = open(f,'r')
        recs = read_records(fd)
        recs_p = (process_records(r) for r in recs)
        write_records(recs_p)
My process_records function checks the content of each record and only returns the records which have a specific sender. My problem is the following: I want a count of the number of elements returned by read_records. I have been keeping track of the number of records in the process_records function using a list:
def process_records(r):
    if r.sender('sender_of_interest'):
        records_list.append(1)
    else:
        records_list.append(0)
    ...
The problem with this approach is that records_list could grow without bound depending on the input. I want to be able to consume the contents of records_list once it grows to a certain point and then restart the process. For example, after 20 records have been processed, I want to find out how many records are from 'sender_of_interest' and how many are from other sources, and then empty the list. Can I do this without using a lock?

You could make your generator a class with an attribute that contains a count of the number of records it has processed. Something like this:
class RecordProcessor(object):
    def __init__(self, recs):
        self.recs = recs
        self.processed_rec_count = 0

    def __iter__(self):  # makes instances directly iterable, e.g. by write_records
        for r in self.recs:
            if r.sender('sender_of_interest'):
                self.processed_rec_count += 1
                # process record r...
                yield r  # processed record

def process_all_records(files):
    for f in files:
        fd = open(f,'r')
        recs_p = RecordProcessor(read_records(fd))
        write_records(recs_p)
        print 'records processed:', recs_p.processed_rec_count

Here's the straightforward approach. Is there some reason why something this simple won't work for you?
seen = 0
matched = 0

def process_records(r):
    global seen, matched
    seen = seen + 1
    if r.sender('sender_of_interest'):
        matched = matched + 1
        records_list.append(1)
    else:
        records_list.append(0)
    if seen > 1000 or someOtherTimeBasedCriteria:
        print "%d of %d total records had the sender of interest" % (matched, seen)
        seen = 0
        matched = 0
If you have the ability to close your stream of messages and re-open it, you might want one more running total of records seen, so that if you had to close the stream and re-open it later, you could skip ahead to the last record you processed and pick up there.
In this code, "someOtherTimeBasedCriteria" might be a timestamp check: record the current time in milliseconds when you begin processing, and if the current time is more than 20,000 ms (20 seconds) later, reset the seen/matched counters.
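For example, a minimal sketch of that time-based reset, using time.time() (seconds rather than milliseconds); the helper name and the 20-second window are illustrative, not from the original code:
import time

seen = 0
matched = 0
window_start = time.time()

def note_record(is_match):
    # count one record and report/reset the counters every 20 seconds
    global seen, matched, window_start
    seen += 1
    if is_match:
        matched += 1
    if time.time() - window_start >= 20:
        print("%d of %d records had the sender of interest" % (matched, seen))
        seen = 0
        matched = 0
        window_start = time.time()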

Related

Dividing lines by modulo in multiprocessing

I want to split a huge file into small files. There are approximately 2 million IDs in the file and I want to sort them by modulo. When you run the program it should ask for the number of files that you want to divide the main file into (x = int(input())). I want to separate the file with a modulo function: if ID % x == 1 it should add that ID to q1 and to f1. But it adds only the first line that matches the requirement.
import multiprocessing

def createlist(x,i,queue_name):
    with open("events.txt") as f:
        next(f)
        for line in f:
            if int(line) % x == i:
                queue_name.put(line)

def createfile(x,i,queue_name):
    for i in range(x):
        file_name = "file{}.txt".format(i+1)
        with open(file_name, "w") as text:
            text.write(queue_name.get())

if __name__=='__main__':
    x = int(input("number of parts "))
    i = 0
    for i in range(x):
        queue_name = "q{}".format(i+1)
        queue_name = multiprocessing.Queue()
        p0 = multiprocessing.Process(target = createlist, args = (x,i,queue_name,))
        process_name = "p{}".format(i+1)
        process_name = multiprocessing.Process(target = createfile, args = (x,i,queue_name,))
        p0.start()
        process_name.start()
Your createfile has two functional issues:
it only reads from the queue once, then terminates
it iterates over the range of the desired number of subsets a second time, so even after fixing the single queue-read issue you end up with one written file and parts - 1 empty files.
To fix your approach make createfile look like this:
def createfile(i, queue_name):  # Note: x has been removed from the args
    file_name = "file{}.txt".format(i + 1)
    with open(file_name, "w") as text:
        while True:
            if queue_name.empty():
                break
            text.write(queue_name.get())
Since x has been removed from createfile's arguments, you'd also remove it from the process instantiation:
process_name = multiprocessing.Process(target = createfile, args = (i,queue_name,))
However ... do not do it like this. The more subsets you want, the more processes and queues you create (two processes and one queue per subset). That is a lot of overhead you create.
Also, while having one responsible process per output file for writing might still make some sense, having multiple processes reading the same (huge) file completely does not.
I did some timing and testing with an input file containing 1000 lines, each consisting of one random integer between 0 and 9999. I created three algorithms and ran each in ten iterations while tracking execution time. I did this for a desired number of subsets of 1, 2, 3, 4, 5 and 10. For the graph below I took the mean value of each series.
orwqpp (green): Is one-reader-writer-queue-per-part. Your approach. It saw an average increase in execution time of 0.48 seconds per additional subset.
orpp (blue): Is one-reader-per-part. This one had a common writer process that took care of writing to all files. It saw an average increase in execution time of 0.25 seconds per additional subset.
ofa (yellow): Is one-for-all. One single function, not run in a separate process, reading and writing in one go. It saw an average increase in execution time of 0.0014 seconds per additional subset.
Keep in mind that these figures were created with an input file 1/2000 the size of yours. The processes in what resembles your approach completed so quickly, they barely got in each other's way. Once the input is large enough to make the processes run for a longer amount of time, contention for CPU resources will increase and so will the penalty of having more processes as more subsets are requested.
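For reference, here is a rough sketch of what the orpp variant could look like: one reader process per part feeding a single shared queue, plus one writer process that owns all the output files. The file names follow the question; the sentinel handling is an assumption of this sketch, not part of the code that was timed.
import multiprocessing

def reader(parts, part, queue):
    # scan the whole input and forward only the lines belonging to this part
    with open("events.txt") as f:
        next(f)  # skip the header line, as in the question
        for line in f:
            if int(line) % parts == part:
                queue.put((part, line))
    queue.put((part, None))  # sentinel: this reader is finished

def writer(parts, queue):
    # a single process responsible for every output file
    handles = {p: open("file{}.txt".format(p + 1), "w") for p in range(parts)}
    finished = 0
    while finished < parts:
        part, line = queue.get()
        if line is None:
            finished += 1
        else:
            handles[part].write(line)
    for h in handles.values():
        h.close()

if __name__ == '__main__':
    x = int(input("number of parts "))
    q = multiprocessing.Queue()
    readers = [multiprocessing.Process(target=reader, args=(x, i, q)) for i in range(x)]
    w = multiprocessing.Process(target=writer, args=(x, q))
    for r in readers:
        r.start()
    w.start()
    for r in readers:
        r.join()
    w.join()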
Here's the one-for-all approach:
def one_for_all(parts):
    handles = dict({el: open("file{}.txt".format(el), "w") for el in range(parts)})
    with open("events.txt") as f:
        next(f)
        for line in f:
            fnum = int(line) % parts
            handles.get(fnum).write(line)
    [h.close() for h in handles.values()]

if __name__ == '__main__':
    x = int(input("number of parts "))
    one_for_all(x)
This currently names the files based on the result of the modulo operation, so numbers where int(line) % parts is 0 will be in file0.txt and so on.
If you don't want that, simply add 1 when formatting the file name:
handles = dict({el: open("file{}.txt".format(el+1), "w") for el in range(parts)})

Incorrect value from dynamodb table description and scan count

I'm having a problem with DynamoDB. I'm attempting to verify the data contained within, but scan seems to be returning only a subset of the data. Here is the code I'm using with the Python boto bindings:
#!/usr/bin/python
# Check the scanned length of a table against the table description
import boto.dynamodb

# Connect
TABLENAME = "MyTableName"
sdbconn = boto.dynamodb.connect_to_region(
    "eu-west-1",
    aws_access_key_id='-snipped-',
    aws_secret_access_key='-snipped-')

# Initial scan
results = sdbconn.layer1.scan(TABLENAME, count=True)
previouskey = results['LastEvaluatedKey']

# Create counting variable
count = results['Count']

# DynamoDB scan results are limited to 1MB but return a key to carry on for the next MB,
# so loop until it does not return a continuation point
while previouskey != False:
    results = sdbconn.layer1.scan(TABLENAME, exclusive_start_key=previouskey, count=True)
    print(count)
    count = count + results['Count']
    try:
        # get next key
        previouskey = results['LastEvaluatedKey']
    except:
        # no key returned, so that's all folks!
        print(previouskey)
        print("Reached End")
        previouskey = False

# these presumably should match; they don't on the MyTableName table, not even close
print(sdbconn.describe_table(TABLENAME)['Table']['ItemCount'])
print(count)
The ItemCount from describe_table gives me 1748175 and print(count) gives me 583021.
I was under the impression that these should always match (I'm aware of the 6-hour update interval), but only 300 rows have been added in the last 24 hours.
Does anyone know if this is an issue with DynamoDB, or does my code make a wrong assumption?
Figured it out finally: it's to do with Local Secondary Indexes. They show up in the table description as unique items; the table has two LSIs, causing it to show 3x the number of items actually present.
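If you want to see how much the LSIs contribute, the DescribeTable response also carries a per-index ItemCount. A hedged sketch, assuming describe_table returns the raw DynamoDB response structure with a LocalSecondaryIndexes list (field names per the DescribeTable API):
desc = sdbconn.describe_table(TABLENAME)['Table']
print("table ItemCount: %d" % desc['ItemCount'])
for lsi in desc.get('LocalSecondaryIndexes', []):
    # each local secondary index reports its own item count
    print("%s: %d" % (lsi['IndexName'], lsi['ItemCount']))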

Best way to limit incoming messages to avoid duplicates

I have a system that accepts messages containing URLs. If certain keywords are in a message, an API call is made with the URL as a parameter.
In order to conserve processing and keep my end presentation efficient, I don't want duplicate URLs being submitted within a certain time range.
So if this URL (http://instagram.com/p/gHVMxltq_8/) comes in and is submitted to the API:
url = incoming.msg['urls']
url = urlparse(url)
if url.netloc == "instagram.com":
    r = requests.get("http://api.some.url/show?url=%s" % url)
and then 3 seconds later the same URL comes in, I don't want it submitted to the API.
What programming method could I deploy to eliminate/limit duplicate messages from being submitted to the API based on time?
UPDATE USING TIM PETERS METHOD:
limit = DecayingSet(86400)
l = limit.add(longUrl)
if l == False:
    pass
else:
    r = requests.get("http://api.some.url/show?url=%s" % url)
This snippet is inside a long-running process that is accepting streaming messages via TCP.
Every time I pass the same URL in, l returns True. But when I try it in the interpreter everything is good: it returns False when the set time hasn't expired.
Does it have to do with the fact that the script is running while the set is being added to? Instance issues?
Maybe overkill, but I like creating a new class for this kind of thing. You never know when requirements will get fancier ;-) For example,
from time import time

class DecayingSet:
    def __init__(self, timeout):  # timeout in seconds
        from collections import deque
        self.timeout = timeout
        self.d = deque()
        self.present = set()

    def add(self, thing):
        # Return True if `thing` not already in set,
        # else return False.
        result = thing not in self.present
        if result:
            self.present.add(thing)
            self.d.append((time(), thing))
        self.clean()
        return result

    def clean(self):
        # forget stuff added >= `timeout` seconds ago
        now = time()
        d = self.d
        while d and now - d[0][0] >= self.timeout:
            _, thing = d.popleft()
            self.present.remove(thing)
As written, it checks for expirations whenever an attempt is made to add a new thing. Maybe that's not what you want, but it should be a cheap check: since the deque holds items in order of addition, clean() gets out at once if no items are expiring. Lots of possibilities.
Why a deque? Because deque.popleft() is a lot faster than list.pop(0) when the number of items becomes non-trivial.
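A minimal usage sketch (the URL is only a placeholder and the 3-second timeout is just for demonstration):
s = DecayingSet(3.0)  # forget entries 3 seconds after they were added
print(s.add("http://instagram.com/p/gHVMxltq_8/"))  # True: first time seen
print(s.add("http://instagram.com/p/gHVMxltq_8/"))  # False: duplicate within 3 seconds
# after the timeout expires, the same URL would return True again
Note that the same DecayingSet instance has to live for the whole run of the process; a freshly created instance starts with an empty set, so its add() would return True every time.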
Suppose your desired interval is 1 hour. Keep 2 counters that increment every hour, but offset from each other by 30 minutes, i.e. counter A goes 1, 2, 3, 4 at 11:17, 12:17, 13:17, 14:17 and counter B goes 1, 2, 3, 4 at 11:47, 12:47, 13:47, 14:47.
Now, if a link comes in and has either of its two counters equal to those of an earlier occurrence of the same link, consider it a duplicate.
The benefit of this scheme over explicit timestamps is that you can hash url+counterA and url+counterB to quickly check whether the url has been seen.
Update: You need two data stores: one, a regular database table (slow) with columns (url, counterA, counterB); and two, a chunk of n bits of memory (fast). Given a url so.com, counterA 17 and counterB 18, first hash "17,so.com" into the range 0 to n - 1 and see if the bit at that address is turned on. Similarly, hash "18,so.com" and see if its bit is turned on.
If the bit is not turned on in either case, you are sure it is a fresh URL within an hour, so we are done (quickly).
If the bit is turned on in either case, then look up the url in the database table to check whether it really was that url or some other URL that hashed to the same bit.
Further update: Bloom filters are an extension of this scheme.
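A hedged sketch of the two offset counters, using a plain in-memory set in place of the bit array plus database combination described above; the hour interval and key format are illustrative only:
import time

INTERVAL = 3600  # one hour
seen = set()     # a real implementation would also prune keys from old intervals

def is_duplicate(url, now=None):
    # two counters that both tick once per INTERVAL, offset by half an interval
    now = time.time() if now is None else now
    counter_a = int(now // INTERVAL)
    counter_b = int((now + INTERVAL // 2) // INTERVAL)
    key_a = ("A", counter_a, url)
    key_b = ("B", counter_b, url)
    dup = key_a in seen or key_b in seen
    seen.add(key_a)
    seen.add(key_b)
    return dup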
I'd recommend keeping an in-memory cache of the most-recently-used URLs. Something like a dictionary:
urls = {}
and then for each URL:
if url in urls and (time.time() - urls[url]) < SOME_TIMEOUT:
    pass  # Don't submit the data
else:
    urls[url] = time.time()
    # Submit the data

Better way to control flow around database query

In my program I need to read a very large table (it exceeds available memory) and I have written the following construct to read from the table and do some work. While I know it's quite possible to rewrite the select into an iterator style, it would still have the same basic structure:
found = True
start = 0
batch = 2500
while found:
    found = False
    for obj in Channel.select().limit(batch).offset(start):
        found = True
        # do some big work...
    start += batch
What I would like to do is have something that doesn't carry around as many clunky state variables. Any ideas on how to clean up this bit of a mess?
FYI - I've tried this as well; not sure I like it any better:
@staticmethod
def qiter(q, start=0, batch=25000):
    obj = True
    while obj:
        for obj in q.limit(batch).offset(start):
            yield obj
        start += batch
The shortest thing I found is the following:
import itertools

for start in itertools.count(0, 2500):
    objs = Channel.select().limit(2500).offset(start)
    if not objs:
        break
    for obj in objs:
        # do some big work...
Basically it's a combination of two things:
the count iterator (from the itertools package of the standard library) reduces the batch counting to a minimum, and
using a separate test and the break statement to get out of it.
In detail:
The count iterator is pretty simple: it yields the infinite series 0, 2500, 5000, 7500, .... As this loop would never end on its own, we need to break out of it somewhere, and that is where the if statement comes into play: if objs is empty, the break exits the outer loop.
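If you want to fold that pattern back into something reusable like the qiter helper from the question, a sketch along these lines might work (wrapping each batch in list() so the emptiness test clearly hits the database once per batch):
import itertools

def batched(query, batch=2500):
    # yield every row of `query`, fetching `batch` rows at a time
    for start in itertools.count(0, batch):
        objs = list(query.limit(batch).offset(start))
        if not objs:
            break
        for obj in objs:
            yield obj

# usage:
# for obj in batched(Channel.select()):
#     do some big work...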
If you are just iterating and don't want to use up all your RAM, you might check out the iterator() method on QueryResultWrapper.
For instance, say you need to iterate over 1,000,000 rows of data and serialize it:
# let's assume we've got 1M stat objects to dump to csv
stats_qr = Stat.select().execute()
# our imaginary serializer class
serializer = CSVSerializer()
# loop over all the stats and serialize
for stat in stats_qr.iterator():
    serializer.serialize_object(stat)
Below is a link to the documentation:
http://peewee.readthedocs.org/en/latest/peewee/cookbook.html#iterating-over-lots-of-rows

How can I make my code more efficient?

I have a list of tuples that contains a tool_id, a time, and a message. I want to select from this list all the elements where the message matches some string, and all the other elements where the time is within some diff of any matching message for that tool.
Here is how I am currently doing this:
# record time for each message matching the specified message for each tool
messageTimes = {}
for row in cdata:  # tool, time, message
    if self.message in row[2]:
        messageTimes[row[0], row[1]] = 1
# now pull out each message that is within the time diff for each matched message
# as well as the matched messages themselves
def determine(tup):
    if self.message in tup[2]: return True  # matched message
    for (tool, date_time) in messageTimes:
        if tool == tup[0]:
            if abs(date_time - tup[1]) <= tdiff:
                return True
    return False

cdata[:] = [tup for tup in cdata if determine(tup)]
This code works, but it takes way too long to run - e.g. when cdata has 600,000 elements (which is typical for my app), it takes 2 hours.
This data came from a database. Originally I was getting just the data I wanted using SQL, but that was taking too long also. I was selecting just the messages I wanted, then for each one of those doing another query to get the data within the time diff of each. That was resulting in tens of thousands of queries. So I changed it to pull all the potential matches at once and then process it in python, thinking that would be faster. Maybe I was wrong.
Can anyone give me some suggestions on speeding this up?
Updating my post to show what I did in SQL as was suggested.
What I did in SQL was pretty straightforward. The first query was something like:
SELECT tool, date_time, message
FROM event_log
WHERE message LIKE '%foo%'
AND other selection criteria
That was fast enough, but it may return 20 or 30 thousand rows. So then I looped through the result set, and for each row ran a query like this (where dt and t are the date_time and tool from a row from the above select):
SELECT date_time, message
FROM event_log
WHERE tool = t
AND ABS(TIMESTAMPDIFF(SECOND, date_time, dt)) <= timediff
That was taking about an hour.
I also tried doing in one nested query where the inner query selected the rows from my first query, and the outer query selected the time diff rows. That took even longer.
So now I am selecting without the message LIKE '%foo%' clause and I am getting back 600,000 rows and trying to pull out the rows I want from python.
The way to optimize the SQL is to do it all in one query, instead of iterating over 20K rows and doing another query for each one.
Usually this means you need to add a JOIN, or occasionally a sub-query. And yes, you can JOIN a table to itself, as long as you rename one or both copies. So, something like this:
SELECT el2.date_time, el2.message
FROM event_log as el1 JOIN event_log as el2
WHERE el1.message LIKE '%foo%'
AND other selection criteria
AND el2.tool = el1.tool
AND ABS(TIMESTAMPDIFF(SECOND, el2.date_time, el1.date_time)) <= timediff
Now, this probably won't be fast enough out of the box, so there are two steps to improve it.
First, look for any columns that obviously need to be indexed. Clearly tool and datetime need simple indices. message may benefit from either a simple index or, if your database has something fancier, maybe something fancier, but given that the initial query was fast enough, you probably don't need to worry about it.
Occasionally, that's sufficient. But usually, you can't guess everything correctly. And there may also be a need to rearrange the order of the queries, etc. So you're going to want to EXPLAIN the query, and look through the steps the DB engine is taking, and see where it's doing a slow iterative lookup when it could be doing a fast index lookup, or where it's iterating over a large collection before a small collection.
For tabular data, you can't go past the Python pandas library, which contains highly optimised code for queries like this.
I fixed this by changing my code as follows.
First, I made messageTimes a dict of lists keyed by the tool:
from collections import defaultdict

messageTimes = defaultdict(list)  # a dict with sorted lists
for row in cdata:  # tool, time, module, message
    if self.message in row[3]:
        messageTimes[row[0]].append(row[1])
Then in the determine function I used bisect:
import bisect

def determine(tup):
    if self.message in tup[3]: return True  # matched message
    times = messageTimes[tup[0]]
    le = bisect.bisect_right(times, tup[1])
    ge = bisect.bisect_left(times, tup[1])
    return ((le and tup[1] - times[le-1] <= tdiff) or
            (ge != len(times) and times[ge] - tup[1] <= tdiff))
With these changes the code that was taking over 2 hours took under 20 minutes, and even better, a query that was taking 40 minutes took 8 seconds!
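For intuition, here is the window test from determine run on made-up numbers:
import bisect

times = [10, 20, 30]   # sorted times for one tool
t, tdiff = 24, 5       # the candidate row's time and the allowed diff

le = bisect.bisect_right(times, t)  # 2: times[le-1] == 20 is the nearest time <= t
ge = bisect.bisect_left(times, t)   # 2: times[ge] == 30 is the nearest time >= t

print(le and t - times[le - 1] <= tdiff)            # True  (24 - 20 = 4 <= 5)
print(ge != len(times) and times[ge] - t <= tdiff)  # False (30 - 24 = 6 > 5)
So this row would be kept, because it falls within tdiff of an earlier matching time.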
I made 2 more changes and now that 20 minute query is taking 3 minutes:
found = defaultdict(int)

def determine(tup):
    if self.message in tup[3]: return True  # matched message
    times = messageTimes[tup[0]]
    idx = found[tup[0]]
    le = bisect.bisect_right(times, tup[1], idx)
    found[tup[0]] = le  # remember where this tool's search left off
    return ((le and tup[1] - times[le-1] <= tdiff) or
            (le != len(times) and times[le] - tup[1] <= tdiff))
