How to make this end-trimming Python script faster?

Any suggestions to make this script run faster? I usually have anywhere from two to ten million lines to run through this script.
while True:
    line = f.readline()
    if not line: break
    linker = 'CTGTAGGCACCATCAAT'
    line_1 = line[:-1]
    for i in range(len(linker)):
        if line_1[-i:] == linker[:i]:
            new_line_1 = '>\n' + line_1[:-i] + '\n'
            seq_1.append(new_line_1)
            if seq_2.count(line) == 0:
                seq_2.append(line)
            else: pass
        else: pass

First of all, it seems that you are creating lots of string objects in the inner loop. You could try to build a list of prefixes first:
linker = 'CTGTAGGCACCATCAAT'
prefixes = []
for i in range(len(linker)):
    prefixes.append(linker[:i])
Additionally, you could use the string method endswith instead of creating a new object in the condition of the inner loop:
while True:
    line = f.readline()
    if not line:
        break
    line_1 = line[:-1]
    for prefix in prefixes:
        if line_1.endswith(prefix):
            new_line_1 = '>\n%s\n' % line_1[:-len(prefix)]
            seq_1.append(new_line_1)
            if seq_2.count(line) == 0:
                seq_2.append(line)
I am not sure about the indexes there (like len(prefix)), and I don't know how much faster it will be.
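One more refinement you could try (my own addition, not something tested in this answer): str.endswith also accepts a tuple of suffixes, so one C-level call can tell you whether any prefix matches before you search for the longest one. A sketch:
# Sketch only: one endswith call covering every linker prefix,
# then a search for the longest match to know how much to trim.
linker = 'CTGTAGGCACCATCAAT'
# Longest prefixes first, so the first match found is the longest trim.
prefixes = tuple(linker[:i] for i in range(len(linker), 0, -1))

def trim_linker(line_1, prefixes=prefixes):
    """Return line_1 with the longest matching linker prefix removed from its end."""
    if line_1.endswith(prefixes):       # one call covering every prefix
        for prefix in prefixes:         # now find which prefix matched
            if line_1.endswith(prefix):
                return line_1[:-len(prefix)]
    return line_1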

I am not sure what your code is meant to do, but the general approach is:
Avoid unnecessary operations, conditions etc.
Move everything you can out of the loop.
Try to do as few levels of loop as possible.
Use common Python practices where possible (they are generally more efficient).
But most important: try to simplify and optimize the algorithm, not necessarily the code implementing it.
Judging from the code and applying some of the above rules, it may look like this:
seq_2 = set()                        # seq_2 is a set now (maybe seq_1 should be also?)
linker = 'CTGTAGGCACCATCAAT'         # assuming the same linker for every line
linker_range = range(len(linker))    # result of the above assumption
for line in f:
    line_1 = line[:-1]
    for i in linker_range:
        if line_1[-i:] == linker[:i]:
            # deleted new_line_1
            seq_1.append('>\n' + line_1[:-i] + '\n')  # do you need this line?
            seq_2.add(line)                           # will ignore if already in set

Probably a large part of the problem is the seq_2.count(line) == 0 test for whether line is in seq_2. This will walk over each element of seq_2 and test whether it's equal to line -- which will take longer and longer as seq_2 grows. You should use a set instead, which will give you constant-time tests for whether it's present through hashing. This will throw away the order of seq_2 -- if you need to keep the order, you could use both a set and a list (test if it's in the set and if not, add to both).
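For example, a minimal sketch of the set-plus-list idea (the names seen and ordered_lines are mine, just for illustration; f is the same file object as above):
seen = set()            # fast O(1) membership tests
ordered_lines = []      # preserves the original insertion order
for line in f:
    if line not in seen:
        seen.add(line)
        ordered_lines.append(line)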
This probably doesn't affect the speed, but it's much nicer to loop for line in f instead of your while True loop with line = f.readline() and the test for when to break. Also, the else: pass statements are completely unnecessary and could be removed.
The definition of linker should be moved outside the loop, since it doesn't get changed. @uhz's suggestions about pre-building prefixes and using endswith are also probably good.

About twice as fast as all these variants (at least on Python 2.7.2):
seq_2 = set()

# Here I use a generator, so I avoid the .append lookup and list resizing.
def F(f):
    # local memory
    local_seq_2 = set()
    # lookup escaping
    local_seq_2_add = local_seq_2.add
    # static variables
    linker = 'CTGTAGGCACCATCAAT'
    linker_range = range(len(linker))
    for line in f:
        line_1 = line[:-1]
        for i in linker_range:
            if line_1[-i:] == linker[:i]:
                local_seq_2_add(line)
                yield '>\n' + line_1[:-i] + '\n'
    # push local memory to the global
    global seq_2
    seq_2 = local_seq_2

# here we consume all the data
seq_1 = tuple(F(f))
Yes, it's ugly and non-pythonic, but it is the fastest way to do the job.
You can also extend this code by putting with open('file.name') as f: inside the generator, or add some other logic.
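For example, a sketch of the same generator owning its own file handle (the seq_2 bookkeeping is omitted for brevity, and 'file.name' is a placeholder path):
def F(filename):
    linker = 'CTGTAGGCACCATCAAT'
    linker_range = range(len(linker))
    with open(filename) as f:    # the file is closed when the generator finishes
        for line in f:
            line_1 = line[:-1]
            for i in linker_range:
                if line_1[-i:] == linker[:i]:
                    yield '>\n' + line_1[:-i] + '\n'

seq_1 = tuple(F('file.name'))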
Note:
The expression '>\n' + line_1[:-i] + '\n' is the doubtful part. On some machines it is the fastest way to concatenate strings; on others the fastest way is '>\n%s\n' % line_1[:-i] or ''.join(('>\n', line_1[:-i], '\n')) (in the version without the lookup, of course). I don't know which will be best for you.
It is strange, but the newer formatter '{}'.format(...) shows the slowest result on my computer.
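If you want to see which variant wins on your own machine, a quick timeit sketch (the sample line and slice index are made up):
import timeit

setup = "line_1 = 'ACGT' * 20; i = 5"
variants = [
    "'>\\n' + line_1[:-i] + '\\n'",
    "'>\\n%s\\n' % line_1[:-i]",
    "''.join(('>\\n', line_1[:-i], '\\n'))",
    "'>\\n{}\\n'.format(line_1[:-i])",
]
for stmt in variants:
    # run each candidate a million times and print the elapsed time
    print(stmt, timeit.timeit(stmt, setup=setup, number=1000000))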

Related

str.split() in the for-loop instantiation, does it cause slower execution?

I'm a sucker for reducing code to its bare minimum and love keeping it short and slim, but occasionally I get into the dilemma of whether I'm doing more harm than good. Below is an example of a situation I frequently encounter and where I start pondering if I am minifying at the expense of speed.
str = "my name is john"
##Alternative 1
for el in str.split(" "):
print(el)
##Alternative 2
splittedStr = str.split(" ")
for el in splittedStr:
print(el)
Which one is faster? I'd assume the second, because we don't split the string on every iteration (I'm not even sure we do that)?
str.split(" ") does the exact same thing in both cases. It creates an anonymous list of the split strings. In the second case you have the minor overhead of assigning it to a variable and then fetching the value of the variable. Its wasted time if you don't need to keep the object for other reasons. But this is a trivial amount of time compared to other object referencing taking place in the same loop. Alternative 2 also leaves the data in memory which is another small performance issue.
The real reason Alternative 1 is better than 2, IMHO, is that it doesn't leave the hint that splittedStr is going to be needed later.
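If you want to convince yourself that the split only happens once, you can wrap it in a helper that announces each call (a throwaway sketch):
def noisy_split(s):
    print("split called")
    return s.split(" ")

for el in noisy_split("my name is john"):
    print(el)

# "split called" is printed exactly once: the iterable in a for statement
# is evaluated a single time, before the loop starts.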
If you want to actually reduce the running time in general, loop over a tuple instead of a list. Assigning the result to a variable and then using the variable is not the best approach, since you reserve a memory location just to store the value. Sometimes, though, you do it just for the sake of clean code, for instance when you have more than one operation in one line, like:
min(str.split(mylist)[3:10])
In this case, it is better to have a variable called min_value for example just to make things cleaner.
Returning to the performance issue, you can actually notice the difference if you loop through a tuple versus a list:
This is looping through a tuple:
for i in (1, 2, 3):
    print(i)
and this is looping through a list:
for i in [1, 2, 3]:
    print(i)
You will find that using the tuple is faster!
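If you want to check that claim on your own machine, a quick measurement sketch (in practice the difference is usually tiny):
import timeit

# time a loop over a tuple literal, then over a list literal
print(timeit.timeit("for i in (1, 2, 3): pass", number=1000000))
print(timeit.timeit("for i in [1, 2, 3]: pass", number=1000000))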

Saving Memory while using large lists

EDIT:
I need help to turn the code below, especially the list, into a generator so that I can save memory on my computer.
I converted doclist into an iterable object, and deleted slist and seuslist, which were previously large lists of names.
https://www.jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/
seuslist1 = open('/Users/AJ/Desktop/Dropbox/DOS_Python/docs/US/socialentrepreneurship_US_list.txt', mode='r+')
seuslist = seuslist1.read()
slist = seuslist.split('\n')
slist = slist[:len(slist)-1]  # I have to take out the last entry because of a weird space. Also explore using os.walk later.

# I switched to just using a list of docs because it's easier to deal with than a dictionary
doclist = []
for i, doc in enumerate(slist):
    string = 'docs/US/', doc
    string = ''.join(string)
    doclist.append(open(string, mode='r+').read())

# Clear these variables to free up memory. Turn doclist into a generator object to save memory.
doclist = iter(doclist)
del seuslist
del slist
seuslist1.close()
Your basic problem, as you've noted, is that you're keeping all the contents of all those files in a single enormous list. Luckily, turning that list into a generator is quite simple. To keep things readable and Pythonic, we'll rename doclist to simply docs, since it's no longer a list.
# Use a generator expression to quickly create a generator.
# This will iterate over every entry in slist.
# For each entry: build the path, open the file, read it, and yield the contents.
docs = (open(path).read() for path in ('docs/US/' + entry for entry in slist))

for doc in docs:
    print(len(doc))  # Do something useful here.
A couple of things to bear in mind when using generators like this.
First, it will help you with your memory problems, because you only ever have one file's contents in memory at once (unless you store it elsewhere, but that's probably a bad idea, because of the aforementioned memory issues).
Second, each file is loaded only when the iteration (for doc in docs) progresses to the next step. This means that if your process takes a long time on each iteration (or even if it doesn't), you could modify files while the process is running, for better or for worse.
Third, the generator expression here isn't the most robust thing ever, since you've got those bare open calls, any one of which could throw an Exception and kill the remainder of your processing. To make it sturdier, you'd want to write an actual generator function like in Calpratt's answer, so you can use context managers, wrap up Exceptions on a per-file basis, and so on.
Finally, remember that a generator may only be used once as-is! Once you exhaust it, it's done. This is usually fine, but you need to make sure you extract all the information you'll need the first time through (besides, you don't want to be re-reading all those files over and over anyway!).
Try something like:
main_file = '/Users/AJ/Desktop/Dropbox/DOS_Python/docs/US/socialentrepreneurship_US_list.txt'

def data_from_file_generator():
    with open(main_file, mode='r+') as path_file:
        for my_path in path_file:
            # strip the trailing newline before building the path
            with open("docs/US/" + my_path.strip(), mode='r+') as data_file:
                yield data_file.read()
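If you also want the per-file Exception handling mentioned above, a hedged variation of the same generator (skipping unreadable files is my assumption about the desired behaviour):
def data_from_file_generator_safe():
    with open(main_file, mode='r') as path_file:
        for my_path in path_file:
            full_path = "docs/US/" + my_path.strip()
            try:
                with open(full_path, mode='r') as data_file:
                    yield data_file.read()
            except IOError as err:
                # skip unreadable files instead of killing the whole run
                print("skipping %s: %s" % (full_path, err))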

list membership test or set

Is it more efficient to check if an item is already in a list before adding it:
for word in open('book.txt', 'r').read().split():
    if word in list:
        pass
    else:
        list.append(item)
or to add everything then run set() on it? like this:
for word in open('book.txt', 'r').read().split():
    list.append(word)
list = set(list)
If the ultimate intention is to construct a set, construct it directly and don't bother with the list:
words = set(open('book.txt','r').read().split())
This will be simple and efficient.
Just as your original code, this has the downside of first reading the entire file into memory. If that's an issue, this can be solved by reading one line at a time:
words = set(word for line in open('book.txt', 'r') for word in line.split())
(Thanks #Steve Jessop for the suggestion.)
Definitely don't take the first approach in your question, unless you know the list to be short, as it will need to scan the entire list on every single word.
A set is a hash table while a list is an array. set membership tests are O(1) while list membership tests are O(n). If anything, you should be filtering the list using a set, not filtering a set using a list.
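In other words, build the set once and use it to filter the list (a small sketch; stopwords is just an illustrative name):
words = open('book.txt', 'r').read().split()       # list, order preserved
stopwords = {'the', 'a', 'an'}                     # illustrative set to filter against
kept = [w for w in words if w not in stopwords]    # each membership test is O(1)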
It's worth testing to find out; but I frequently use comprehensions to filter my lists, and I find that works well; particularly if the code is experimental and subject to change.
l = list(open('book.txt', 'r').read().split())
unique_l = list(set(l))

# maybe something else:
good_l = [word for word in l if not word in naughty_words]
I have heard that this helps with efficiency; but as I said, a test tells more.
The algorithm with word in list is an expensive operation. Why? Because, to see if an item is in the list, you have to check every item in the list. Every time. It's a Shlemiel the painter algorithm. Every lookup is O(n), and you do it n times. There's no startup cost, but it gets expensive very quickly. And you end up looking at each item way more than one time - on average, len(list)/2 times.
Looking to see if things are in the set, is (usually) MUCH cheaper. Items are hashed, so you calculate the hash, look there, and if it's not there, it's not in the set - O(1). You do have to create the set the first time, so you'll look at every item once. Then you look at each item one more time to see if it's already in your set. Still overall O(n).
So, doing list(set(mylist)) is definitely preferable to your first solution.
#NPE's answer doesn't close the file explicitly. It's better to use a context manager
with open('book.txt', 'r') as fin:
    words = set(fin.read().split())
For normal text files this is probably adequate. If it's an entire DNA sequence for example you probably don't want to read the entire file into memory at once.
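For that case, a memory-friendlier sketch (assuming the file does have line breaks) is to build the set incrementally, one line at a time:
words = set()
with open('book.txt', 'r') as fin:
    for line in fin:
        words.update(line.split())   # only one line is held in memory at a time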

faster membership testing in python than set()

I have to check the presence of millions of elements (20-30 letter strings) in a list containing 10-100k of those elements. Is there a faster way of doing that in Python than set()?
import sys

# load ids
ids = set(x.strip() for x in open(idfile))

for line in sys.stdin:
    id = line.strip()
    if id in ids:
        # print fastq
        print id
        # update ids
        ids.remove(id)
set is as fast as it gets.
However, if you rewrite your code to create the set once, and not change it, you can use the frozenset built-in type. It's exactly the same except immutable.
If you're still having speed problems, you need to speed your program up in other ways, such as by using PyPy instead of CPython.
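For completeness, the frozenset variant is a one-line change (a sketch; note that the original code's ids.remove(id) would then have to go, since a frozenset cannot be modified):
ids = frozenset(x.strip() for x in open(idfile))   # immutable, same O(1) lookups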
As I noted in my comment, what's probably slowing you down is that you're sequentially checking each line from sys.stdin for membership of your 'master' set. This is going to be really, really slow, and doesn't allow you to make use of the speed of set operations. As an example:
#!/usr/bin/env python
import random
# create two million-element sets of random numbers
a = set(random.sample(xrange(10000000),1000000))
b = set(random.sample(xrange(10000000),1000000))
# a intersection b
c = a & b
# a difference c
d = list(a - c)
print "set d is all remaining elements in a not common to a intersection b"
print "length of d is %s" % len(d)
The above runs in ~6 wallclock seconds on my five year-old machine, and it's testing for membership in larger sets than you require (unless I've misunderstood you). Most of that time is actually taken up creating the sets, so you won't even have that overhead. The fact that the strings you refer to are long isn't relevant here; creating a set creates a hash table, as agf explained. I suspect (though again, it's not clear from your question) that if you can get all your input data into a set before you do any membership testing, it'll be a lot faster than reading it in one item at a time and then checking for set membership.
You should try to split your data to make the search faster. A tree structure would let you find out very quickly whether the data is present or not.
For example, start with a simple map that links the first letter to all the keys starting with that letter; that way you don't have to search all the keys, only a smaller part of them.
This would look like:
ids = {}
for id in open(idfile):
    id = id.strip()    # strip the newline so the keys match the stripped stdin lines
    ids.setdefault(id[0], set()).add(id)

for line in sys.stdin:
    id = line.strip()
    if id in ids.get(id[0], set()):
        # print fastq
        print id
        # update ids
        ids[id[0]].remove(id)
Creation will be a bit slower but the search should be much faster (I would expect up to 20 times faster, if the first character of your keys is well distributed and not always the same).
This is a first step, you could do the same thing with the second character and so on, search would then just be walking the tree with each letter...
As mentioned by urschrei, you should "vectorize" the check.
It is faster to check for the presence of a million elements once (as that is done in C) than to do the check for one element a million times.
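Applied to the code in the question, a sketch of that vectorised check (this assumes it is acceptable to read all of stdin before printing anything, and it reuses idfile from the question):
import sys

ids = set(x.strip() for x in open(idfile))
queries = set(line.strip() for line in sys.stdin)   # read everything up front

for id in ids & queries:     # one C-level intersection instead of a per-line loop
    print(id)

ids -= queries               # bulk removal instead of one remove() per hit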

Python: Nested for loops or "next" statement

I'm a rookie hobbyist and I nest for loops when I write python, like so:
dict = {
    key1: {subkey/value1: value2},
    ...
    keyn: {subkeyn/valuen: valuen+1},
}

for key in dict:
    for subkey/value in key:
        do it to it
I'm aware of a "next" keyword that would accomplish the same goal in one line (I asked a question about how to use it but I didn't quite understand it).
So to me, a nested for loop is much more readable. Why, then, do people use "next"? I read somewhere that Python is a dynamically-typed and interpreted language, and because + both concatenates strings and sums numbers, it must check variable types on each loop iteration in order to know what the operators do, etc. Does using "next" prevent this in some way, speeding up the execution, or is it just a matter of style/preference?
next is precious to advance an iterator when necessary, without that advancement controlling an explicit for loop. For example, if you want "the first item in S that's greater than 100", next(x for x in S if x > 100) will give it to you, no muss, no fuss, no unneeded work (as everything terminates as soon as a suitable x is located) -- and you get an exception (StopIteration) if unexpectedly no x matches the condition. If a no-match is expected and you want None in that case, next((x for x in S if x > 100), None) will deliver that. For this specific purpose, it might be clearer to you if next was actually named first, but that would betray its much more general use.
Consider, for example, the task of merging multiple sequences (e.g., a union or intersection of sorted sequences -- say, sorted files, where the items are lines). Again, next is just what the doctor ordered, because none of the sequences can dominate over the others by controlling a "main for loop". So, assuming for simplicity no duplicates can exist (a condition that's not hard to relax if needed), you keep pairs (currentitem, itsfile) in a list controlled by heapq, and the merging becomes easy... but only thanks to the magic of next to advance the correct file once its item has been used, and that file only.
import heapq

def merge(*theopentextfiles):
    theheap = []
    for afile in theopentextfiles:
        theitem = next(afile, '')
        if theitem: theheap.append((theitem, afile))
    heapq.heapify(theheap)
    while theheap:
        theitem, afile = heapq.heappop(theheap)
        yield theitem
        theitem = next(afile, '')
        if theitem: heapq.heappush(theheap, (theitem, afile))
Just try to do anything anywhere this elegant without next...!-)
One could go on for a long time, but the two use cases "advance an iterator by one place (without letting it control a whole for loop)" and "get just the first item from an iterator" account for most important uses of next.
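A tiny sketch of those two use cases (the values are made up):
S = [7, 42, 250, 99, 512]

# 1. Get just the first item matching a condition, with a default if none does.
first_big = next((x for x in S if x > 100), None)   # 250

# 2. Advance an iterator by one place without a for loop controlling it.
it = iter(S)
head = next(it)      # 7
rest = list(it)      # [42, 250, 99, 512]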
