String manipulation appears to be inefficient - python

I think my code is too inefficient. I'm guessing it has something to do with using strings, though I'm unsure. Here is the code:
genome = FASTAdata[1]
genomeLength = len(genome)
# Hash table holding all the k-mers we will come across
kmers = dict()
# We go through all the possible k-mers by index
for outer in range(0, genomeLength - 1):
    for inner in range(outer + 2, outer + 22):
        substring = genome[outer:inner]
        if substring in kmers:   # if we already have this substring on record, increase its value (count of num of appearances) by 1
            kmers[substring] += 1
        else:
            kmers[substring] = 1 # otherwise record that it's here once
This is meant to count all substrings of length at most 20. The code seems to take forever and never terminates, so something has to be wrong here. Is using [:] on strings causing the huge overhead? And if so, what can I replace it with?
And for clarity, the file in question is nearly 200 MB, so pretty big.

I would recommend using a dynamic programming algorithm. The problem is that for all inner strings that are not found, you are re-searching those again with extra characters appended onto them, so of course those will also not be found. I do not have a specific algorithm in mind, but this is certainly a case for dynamic programming, where you remember what you have already searched for. As a really crummy example: remember all substrings of length 1, 2, 3, ... that are not found, and never extend those in the next iterations, where the strings are only longer.

You should use memoryview to avoid creating sub-strings: slicing a memoryview with [:] returns a "view" instead of a copy. BUT you must use Python 3.3 or higher (before that, memoryview objects are not hashable).
Also, a Counter will simplify your code.
from collections import Counter

genome = memoryview("abcdefghijkrhirejtvejtijvioecjtiovjitrejabababcd".encode('ascii'))
genomeLength = len(genome)
minlen, maxlen = 2, 22

def fragments():
    for start in range(0, genomeLength - minlen):
        for finish in range(start + minlen, start + maxlen):
            if finish <= genomeLength:
                yield genome[start:finish]

count = Counter(fragments())
for (mv, n) in count.most_common(3):
    print(n, mv.tobytes())
produces:
4 b'ab'
3 b'jt'
3 b'ej'
A 1,000,000 byte random array takes 45s on my laptop, but 2,000,000 causes swapping (over 8GB memory use). However, since your fragment size is small, you can easily break the problem up into million-long sub-sequences and then combine results at the end (just be careful about overlaps). That would give a total running time for a 200MB array of ~3 hours, with luck.
PS To be clear, by "combine results at the end" I assume that you only need to save the most popular for each 1M sub-sequence, by, for example, writing them to a file. You cannot keep the counter in memory - that is what is using the 8GB. That's fine if you have fragments that occur many thousands of times, but obviously won't work for smaller numbers (you might see a fragment just once in each of the 200 1M sub-sequences, and so never save it, for example). In other words, results will be lower bounds that are incomplete, particularly at lower frequencies (values are complete only if a fragment is found and recorded in every sub-sequence). If you need an exact result, this is not suitable.
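Below is a rough sketch of that chunk-and-combine idea, in case it helps. The chunk size, the choice to keep only the top 1000 fragments per chunk, and the use of bytes keys (so counts survive after each chunk's view is discarded) are my own assumptions, not something tested at the 200 MB scale.
from collections import Counter

CHUNK = 1000000
MINLEN, MAXLEN = 2, 22
KEEP = 1000                      # how many top fragments to keep per chunk (arbitrary)

def count_chunk(view, start, stop):
    # Count fragments whose *starting* index lies in [start, stop); fragments near
    # the boundary may reach into the next chunk, so counting by start position
    # avoids double-counting the overlaps.
    c = Counter()
    for i in range(start, stop):
        for j in range(i + MINLEN, min(i + MAXLEN, len(view) + 1)):
            c[bytes(view[i:j])] += 1
    return c

def top_fragments(genome_bytes):
    view = memoryview(genome_bytes)
    combined = Counter()
    for start in range(0, len(view), CHUNK):
        stop = min(start + CHUNK, len(view))
        partial = count_chunk(view, start, stop)
        combined.update(dict(partial.most_common(KEEP)))   # keep only the popular ones
    return combined.most_common(KEEP)
As the PS above warns, keeping only the top KEEP fragments per chunk makes the final counts lower bounds for the rarer fragments.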

Related

Python: improve performance in log writing to file

I am writing a piece of code that generates new parameter values in a double for loop and stores these values to a file. The loop iteration count can go as high as 10,000 * 100,000. I have stored the variable values in a string, which gets appended with the newer values on every iteration. Finally, at the end of the loop I write the complete string to a txt file.
op = open("output file path", "w+")
totresult = ""
for n seconds:                     # this user input parameter can be up to 100,000
    result = ""
    for car in (cars running):     # number of cars can be 10000
        # Code to check if given car is in range of another car
        .
        .
        # if car is in range of another car
        if distance < 1000:
            result = getDetailsofOtherCar()
            totresult = totresult + carName + result
# end of loops
op.write(totresult)
op.close()
My question here is: is there a better, more Pythonic way to perform this kind of logging? I'm guessing the string gets very bulky in the later iterations and may be causing the delay in execution. Is a string the best option for accumulating the values, or should I consider other Python data structures like a list or an array? I came across the logging module but would like to get an opinion before switching to it.
I tried looking up for similar issues but found nothing similar to my current doubt.
Open to any suggestions
Thank you
Edit: code added
You can write to the file as you go e.g.
with open("output.txt", "w") as log:
    for i in range(10):
        for j in range(10):
            log.write(str((i, j)))
Update: whether or not directly streaming the records is faster than concatenating them in a memory buffer depends crucially on how big the buffer becomes, which in turn depends on the number of records and the size of each record. On my machine this seems to kick in around 350MB.
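If you want to see where that crossover lies on your own machine, here is a rough benchmark sketch; the record text, the count, and the list-plus-join variant (rather than repeated string concatenation, which grows quadratically) are my own choices.
import time

RECORD = "carName,details,1234\n"      # stand-in record text
N = 1000000                            # number of records; adjust to taste

def stream_directly():
    with open("stream.txt", "w") as f:
        for _ in range(N):
            f.write(RECORD)

def buffer_then_write():
    parts = []                         # accumulate in a list and join once at the end
    for _ in range(N):
        parts.append(RECORD)
    with open("buffered.txt", "w") as f:
        f.write("".join(parts))

for fn in (stream_directly, buffer_then_write):
    t0 = time.time()
    fn()
    print(fn.__name__, round(time.time() - t0, 2), "seconds")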

Generate a large number of HEX

I am trying to generate all 16^16 values, but there are a few problems, mainly memory.
I tried to generate them in python like this:
for y in range(0, 16**16):
    print '0x%0*X' % (16, y)
This gives me:
OverflowError: range() result has too many items
If I use sys.maxint I get a MemoryError.
To be more precise, I want to generate all combinations of hex digits of length 16, i.e.:
0000000000000000
0000000000000001
0000000000000002
...
FFFFFFFFFFFFFFFF
Also, how do I calculate the approximate time it will take me to generate them?
I am open to the use of any programming language as long as I can save them to an output file.
Well... 16^16 = 1.8446744e+19, so let's say you could calculate 10 values per nanosecond (that's a 10 GHz rate, by the way). Then it would take you 16^16 / 10 nanoseconds to compute them all, or 58.4 years. Also, if you could somehow compress each value into 1 bit (which is impossible), it would require 2 exabytes of memory to contain those values (16^16/8/2^60).
This seems like a very artificial exercise. Is it homework, or is there a reason for generating this list? It will be very long (see other answers)!
Having said that, you should ask yourself: why is this happening? The answer is that in Python 2.x, range produces an actual list. If you want to avoid that, you can:
Use Python 3.x, in which range does not actually make a list, but a special generator-like object.
Use xrange, which also doesn't actually make a list, but again produces an object.
As for timing, all of the time will be in writing to the file or screen. You can get an estimate by making a somewhat smaller list and then doing some math, but you have to be careful that it's big enough that the time is dominated by writing the lines, and not opening and closing the file.
But you should also ask yourself how big the resultant file will be... You may not like what you find. Perhaps you mean 2^16?
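For instance, a back-of-the-envelope sketch of that estimating approach (Python 2 here, to match the question; the sample size and file name are arbitrary):
import time

SAMPLE = 10**6            # small enough to finish, big enough that writing dominates
TOTAL = 16**16

start = time.time()
with open("sample_hex.txt", "w") as f:
    for y in xrange(SAMPLE):              # xrange: no giant list is materialised
        f.write('%016X\n' % y)
elapsed = time.time() - start

print "sample of %d lines took %.1f s" % (SAMPLE, elapsed)
print "estimated total: %.1f years" % (elapsed * TOTAL / SAMPLE / (3600.0 * 24 * 365))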

Repeatedly appending to a large list (Python 2.6.6)

I have a project where I am reading in ASCII values from a microcontroller through a serial port (looks like this : AA FF BA 11 43 CF etc)
The input is coming in quickly (38 two-character sets per second).
I'm taking this input and appending it to a running list of all measurements.
After about 5 hours, my list has grown to ~ 855000 entries.
I'm given to understand that the larger a list becomes, the slower list operations become. My intent is to have this test run for 24 hours, which should yield around 3M results.
Is there a more efficient, faster way to append to a list than list.append()?
Thanks Everyone.
I'm given to understand that the larger a list becomes, the slower list operations become.
That's not true in general. Lists in Python are, despite the name, not linked lists but arrays. There are operations that are O(n) on arrays (copying and searching, for instance), but you don't seem to use any of these. As a rule of thumb: If it's widely used and idiomatic, some smart people went and chose a smart way to do it. list.append is a widely-used builtin (and the underlying C function is also used in other places, e.g. list comprehensions). If there was a faster way, it would already be in use.
As you will see when you inspect the source code, lists are overallocating, i.e. when they are resized, they allocate more than needed for one item so the next n items can be appended without needing another resize (which is O(n)). The growth isn't constant, it is proportional to the list size, so resizing becomes rarer as the list grows larger. Here's the snippet from listobject.c:list_resize that determines the overallocation:
/* This over-allocates proportional to the list size, making room
* for additional growth. The over-allocation is mild, but is
* enough to give linear-time amortized behavior over a long
* sequence of appends() in the presence of a poorly-performing
* system realloc().
* The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
*/
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
As Mark Ransom points out, older Python versions (<2.7, 3.0) have a bug that make the GC sabotage this. If you have such a Python version, you may want to disable the gc. If you can't because you generate too much garbage (that slips refcounting), you're out of luck though.
One thing you might want to consider is writing your data to a file as it's collected. I don't know (or really care) if it will affect performance, but it will help ensure that you don't lose all your data if power blips. Once you've got all the data, you can suck it out of the file and jam it in a list or an array or a numpy matrix or whatever for processing.
Appending to a python list has a constant cost. It is not affected by the number of items in the list (in theory). In practice appending to a list will get slower once you run out of memory and the system starts swapping.
http://wiki.python.org/moin/TimeComplexity
It would be helpful to understand why you actually append things into a list. What are you planning to do with the items. If you don't need all of them you could build a ring buffer, if you don't need to do computation you could write the list to a file, etc.
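As a quick illustration of the ring-buffer-plus-file idea (the buffer size, the file name, and the read_two_chars() serial-read helper are all hypothetical):
from collections import deque

N = 10000
recent = deque(maxlen=N)        # ring buffer: old readings fall off automatically

with open("readings.log", "a") as log:
    while True:                         # runs until interrupted (e.g. end of the 24 h test)
        value = read_two_chars()        # hypothetical serial-port read, e.g. "AA"
        recent.append(value)
        log.write(value + "\n")         # stream everything to disk as it arrives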
First of all, 38 two-character sets per second, 1 stop bit, 8 data bits, and no parity, is only 760 baud, not fast at all.
But anyway, my suggestion, if you're worried about having one overly large list, is just to store the list on disk once it reaches a certain size and start a new one, repeating until you've gotten all the data, then combining all the lists into one once you're done receiving the data.
Though you may skip the sublists completely and just go with nmichaels' suggestion, writing the data to a file as you get it and using a small circular buffer to hold the received data that has not yet been written.
It might be faster to use numpy if you know how long the array is going to be and you can convert your hex codes to ints:
import numpy
a = numpy.zeros(3000000, numpy.int32)
for i in range(3000000):
    a[i] = int(scanHexFromSerial(), 16)
This will leave you with an array of integers (which you could convert back to hex with hex()), but depending on your application maybe that will work just as well for you.

Generating non-repeating random numbers in Python

Ok this is one of those trickier than it sounds questions so I'm turning to stack overflow because I can't think of a good answer. Here is what I want: I need Python to generate a simple a list of numbers from 0 to 1,000,000,000 in random order to be used for serial numbers (using a random number so that you can't tell how many have been assigned or do timing attacks as easily, i.e. guessing the next one that will come up). These numbers are stored in a database table (indexed) along with the information linked to them. The program generating them doesn't run forever so it can't rely on internal state.
No big deal right? Just generate a list of numbers, shove them into an array and use Python "random.shuffle(big_number_array)" and we're done. Problem is I'd like to avoid having to store a list of numbers (and thus read the file, pop one off the top, save the file and close it). I'd rather generate them on the fly. Problem is that the solutions I can think of have problems:
1) Generate a random number and then check if it has already been used. If it has been used generate a new number, check, repeat as needed until I find an unused one. Problem here is that I may get unlucky and generate a lot of used numbers before getting one that is unused. Possible fix: use a very large pool of numbers to reduce the chances of this (but then I end up with silly long numbers).
2) Generate a random number and then check if it has already been used. If it has been used add or subtract one from the number and check again, keep repeating until I hit an unused number. Problem is this is no longer a random number as I have introduced bias (eventually I will get clumps of numbers and you'd be able to predict the next number with a better chance of success).
3) Generate a random number and then check if it has already been used. If it has been used add or subtract another randomly generated random number and check again, problem is we're back to simply generating random numbers and checking as in solution 1.
4) Suck it up and generate the random list and save it, have a daemon put them into a Queue so there are numbers available (and avoid constantly opening and closing a file, batching it instead).
5) Generate much larger random numbers and hash them (i.e. using MD5) to get a smaller numeric value, we should rarely get collisions, but I end up with larger than needed numbers again.
6) Prepend or append time based information to the random number (i.e. unix timestamp) to reduce chances of a collision, again I get larger numbers than I need.
Anyone have any clever ideas that will reduce the chances of a "collision" (i.e. generating a random number that is already taken) but will also allow me to keep the number "small" (i.e. less than a billion, or a thousand million for the Europeans =)?
Answer and why I accepted it:
So I will simply go with 1 and hope it's not an issue. However, if it is, I will go with the deterministic solution of generating all the numbers and storing them so that there is a guarantee of getting a new random number, and I can use "small" numbers (i.e. 9 digits instead of an MD5/etc.).
This is a neat problem, and I've been thinking about it for a while (with solutions similar to Sjoerd's), but in the end, here's what I think:
Use your point 1) and stop worrying.
Assuming real randomness, the probability that a random number has already been chosen before is the count of previously chosen numbers divided by the size of your pool, i.e. the maximal number.
If you say you only need a billion numbers, i.e. nine digits: Treat yourself to 3 more digits, so you have 12-digit serial numbers (that's three groups of four digits – nice and readable).
Even when you're close to having chosen a billion numbers previously, the probability that your new number is already taken is still only 0.1%.
Do step 1 and draw again. You can still check for an "infinite" loop, say don't try more than 1000 times or so, and then fallback to adding 1 (or something else).
You'll win the lottery before that fallback ever gets used.
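A minimal sketch of that advice, assuming a 12-digit pool and hypothetical is_taken()/save() helpers backed by your indexed table:
import random

POOL = 10**12          # 12-digit serial numbers, as suggested above
MAX_TRIES = 1000       # the "infinite loop" guard

def new_serial(is_taken, save):
    for _ in range(MAX_TRIES):
        candidate = random.randrange(POOL)
        if not is_taken(candidate):     # one indexed database lookup
            save(candidate)
            return candidate
    # With at most a billion numbers assigned, this branch is essentially unreachable.
    raise RuntimeError("no free serial found after %d tries" % MAX_TRIES)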
You could use Format-Preserving Encryption to encrypt a counter. Your counter just goes from 0 upwards, and the encryption uses a key of your choice to turn it into a seemingly random value of whatever radix and width you want.
Block ciphers normally have a fixed block size of e.g. 64 or 128 bits. But Format-Preserving Encryption allows you to take a standard cipher like AES and make a smaller-width cipher, of whatever radix and width you want (e.g. radix 10, width 9 for the parameters of the question), with an algorithm which is still cryptographically robust.
It is guaranteed to never have collisions (because cryptographic algorithms create a 1:1 mapping). It is also reversible (a 2-way mapping), so you can take the resulting number and get back to the counter value you started with.
AES-FFX is one proposed standard method to achieve this.
I've experimented with some basic Python code for AES-FFX--see Python code here (but note that it doesn't fully comply with the AES-FFX specification). It can e.g. encrypt a counter to a random-looking 7-digit decimal number. E.g.:
0000000 0731134
0000001 6161064
0000002 8899846
0000003 9575678
0000004 3030773
0000005 2748859
0000006 5127539
0000007 1372978
0000008 3830458
0000009 7628602
0000010 6643859
0000011 2563651
0000012 9522955
0000013 9286113
0000014 5543492
0000015 3230955
... ...
For another example in Python, using another non-AES-FFX (I think) method, see this blog post "How to Generate an Account Number" which does FPE using a Feistel cipher. It generates numbers from 0 to 2^32-1.
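For a flavour of how format-preserving encryption of a counter can look, here is a toy sketch: a 4-round Feistel network on 30-bit values with "cycle walking" to stay below 10^9. It is not AES-FFX, not the code linked above, and not audited cryptography; the round function, round count, and key handling are all illustrative assumptions.
import hashlib

HALF_BITS = 15
MASK = (1 << HALF_BITS) - 1
DOMAIN = 10**9                 # 9-digit serial numbers
ROUNDS = 4

def _round(value, key, rnd):
    # toy round function: hash (key, round, half-value) down to 15 bits
    digest = hashlib.sha256(b"%d:%d:%d" % (key, rnd, value)).digest()
    return int.from_bytes(digest[:4], "big") & MASK

def _feistel(n, key):
    left, right = n >> HALF_BITS, n & MASK
    for rnd in range(ROUNDS):
        left, right = right, left ^ _round(right, key, rnd)
    return (left << HALF_BITS) | right          # a permutation of [0, 2**30)

def encrypt_counter(counter, key=0x5EED):
    assert 0 <= counter < DOMAIN
    n = _feistel(counter, key)
    while n >= DOMAIN:                          # cycle-walk back into the 9-digit range
        n = _feistel(n, key)
    return n

print([encrypt_counter(i) for i in range(5)])   # distinct, random-looking serials
Because every step is a bijection, distinct counters always give distinct serials, and the mapping can be inverted by running the rounds backwards.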
With some modular arithmetic and prime numbers, you can create all numbers between 0 and a big prime, out of order. If you choose your numbers carefully, the next number is hard to guess.
modulo = 87178291199       # prime
incrementor = 17180131327  # relative prime
current = 433494437        # some start value
for i in xrange(1, 100):
    print current
    current = (current + incrementor) % modulo
If they don't have to be random, but just not obviously linear (1, 2, 3, 4, ...), then here's a simple algorithm:
Pick two prime numbers. One of them will be the largest number you can generate, so it should be around one billion. The other should be fairly large.
max_value = 795028841
step = 360287471
previous_serial = 0
for i in xrange(0, max_value):
    previous_serial += step
    previous_serial %= max_value
    print "Serial: %09i" % previous_serial
Just store the previous serial each time so you know where you left off. I can't prove mathematically that this works (been too long since those particular classes), but the key fact is that step and max_value are coprime, so the sequence visits every residue before repeating; it's demonstrably correct with smaller primes:
s = set()
with open("test.txt", "w+") as f:
    previous_serial = 0
    for i in xrange(0, 2711):
        previous_serial += 1811
        previous_serial %= 2711
        assert previous_serial not in s
        s.add(previous_serial)
This does mean that given a few serial numbers, it'd be possible to figure out what your values are--but with only nine digits, it's not likely that you're going for unguessable numbers anyway.
If you don't need something cryptographically secure, but just "sufficiently obfuscated"...
Galois Fields
You could try operations in Galois fields, e.g. GF(2^32), to map a simple incrementing counter x to a seemingly random serial number y:
x = counter_value
y = some_galois_function(x)
- Multiply by a constant
  - Inverse is to multiply by the reciprocal of the constant
- Raise to a power: x^n
- Reciprocal x^(-1)
  - Special case of raising to power n
  - It is its own inverse
- Exponentiation of a primitive element: a^x
  - Note that this doesn't have an easily-calculated inverse (discrete logarithm)
  - Ensure a is a primitive element, aka generator
Many of these operations have an inverse, which means, given your serial number, you can calculate the original counter value from which it was derived.
As for finding a library for Galois Field for Python... good question. If you don't need speed (which you wouldn't for this) then you could make your own. I haven't tried these:
NZMATH
Finite field Python package
Sage, although it's a whole environment for mathematical computing, much more than just a Python library
Matrix multiplication in GF(2)
Pick a suitable 32×32 invertible matrix in GF(2), and multiply a 32-bit input counter by it. This is conceptually related to LFSR, as described in S.Lott's answer.
CRC
A related possibility is to use a CRC calculation, based on the remainder of long division by an irreducible polynomial in GF(2). Python code is readily available for CRCs (crcmod, pycrc), although you might want to pick a different irreducible polynomial than is normally used, for your purposes. I'm a little fuzzy on the theory, but I think a 32-bit CRC should generate a unique value for every possible combination of 4-byte inputs. Check this. It's quite easy to verify experimentally, by feeding the output back into the input and checking that it produces a complete cycle of length 2^32 - 1 (zero just maps to zero). You may need to get rid of any initial/final XORs in the CRC algorithm for this check to work.
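If you want to play with the matrix-multiplication option, here is a minimal sketch. Choosing a random upper-triangular matrix with 1s on the diagonal is just an easy way to guarantee invertibility, and like the rest of this answer it is obfuscation, not cryptography.
import random

random.seed(1234)      # fixed "key" so the mapping is reproducible
ROWS = []
for i in range(32):
    row = 1 << i                                            # unit diagonal
    above = random.getrandbits(32) & ~((1 << (i + 1)) - 1)  # random bits above the diagonal
    ROWS.append(row | (above & 0xFFFFFFFF))

def scramble(counter):
    # Multiply the 32-bit counter, viewed as a GF(2) vector, by the matrix.
    # Upper-triangular with unit diagonal means invertible, so this is a bijection on 0..2^32-1.
    out = 0
    for i, row in enumerate(ROWS):
        bit = bin(row & counter).count("1") & 1             # dot product mod 2
        out |= bit << i
    return out

print([scramble(n) for n in range(5)])
Restricting the output to values below a billion would need something extra, e.g. the cycle-walking trick from the Feistel sketch earlier in this thread.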
I think you are overestimating the problems with approach 1). Unless you have hard real-time requirements, just checking by random choice terminates rather fast. The probability of needing more than a given number of iterations decays exponentially. With 100M numbers outputted (10% fill factor) you'll have a one in a billion chance of requiring more than 9 iterations. Even with 50% of numbers taken you'll need 2 iterations on average and have a one in a billion chance of requiring more than 30 checks. Even the extreme case where 99% of the numbers are already taken might still be reasonable: you'll average 100 iterations and have a 1 in a billion chance of requiring 2062 iterations.
The standard Linear Congruential random number generator's seed sequence CANNOT repeat until the full set of numbers from the starting seed value have been generated. Then it MUST repeat precisely.
The internal seed is often large (48 or 64 bits). The generated numbers are smaller (32 bits usually) because the entire set of bits are not random. If you follow the seed values they will form a distinct non-repeating sequence.
The question is essentially one of locating a good seed that generates "enough" numbers. You can pick a seed, and generate numbers until you get back to the starting seed. That's the length of the sequence. It may be millions or billions of numbers.
There are some guidelines in Knuth for picking suitable seeds that will generate very long sequences of unique numbers.
You can run 1) without running into the problem of too many wrong random numbers if you just decrease the random interval by one each time.
For this method to work, you will need to save the numbers already given (which you want to do anyway) and also save the quantity of numbers taken.
It is pretty obvious that, after having collected 10 numbers, your pool of possible random numbers will have been decreased by 10. Therefore, you must not choose a number between 1 and 1,000,000 but between 1 and 999,990. Of course this number is not the real number but only an index (unless the 10 numbers collected were 999,991, 999,992, ...); you'd have to count from 1, omitting all the numbers already collected.
Of course, your algorithm should be smarter than just counting from 1 to 1,000,000, but I hope you understand the method.
I don’t like drawing random numbers until I get one which fits either. It just feels wrong.
My solution: https://github.com/glushchenko/python-unique-id. I think you should extend the matrix to 1,000,000,000 variations and have fun.
I'd rethink the problem itself... You don't seem to be doing anything sequential with the numbers... and you've got an index on the column which has them. Do they actually need to be numbers?
Consider a sha hash... you don't actually need the entire thing. Do what git or other url shortening services do, and take first 3/4/5 characters of the hash. Given that each character now has 36 possible values instead of 10, you have 2,176,782,336 combinations instead of 999,999 combinations (for six digits). Combine that with a quick check on whether the combination exists (a pure index query) and a seed like a timestamp + random number and it should do for almost any situation.
Do you need this to be cryptographically secure or just hard to guess? How bad are collisions? Because if it needs to be cryptographically strong and have zero collisions, it is, sadly, impossible.
I started trying to write an explanation of the approach used below, but just implementing it was easier and more accurate. This approach has the odd behavior that it gets faster the more numbers you've generated. But it works, and it doesn't require you to generate all the numbers in advance.
As a simple optimization, you could easily make this class use a probabilistic algorithm (generate a random number, and if it's not in the set of used numbers add it to the set and return it) at first, keep track of the collision rate, and switch over to the deterministic approach used here once the collision rate gets bad.
import random

class NonRepeatingRandom(object):
    def __init__(self, maxvalue):
        self.maxvalue = maxvalue
        self.used = set()
    def next(self):
        if len(self.used) >= self.maxvalue:
            raise StopIteration
        r = random.randrange(0, self.maxvalue - len(self.used))
        # advance to the (r+1)-th not-yet-used value
        result = 0
        for i in range(r + 1):
            result += 1
            while result in self.used:
                result += 1
        self.used.add(result)
        return result
    def __iter__(self):
        return self
    def __getitem__(self, index):
        raise NotImplementedError
    def get_all(self):
        return [i for i in self]
>>> n = NonRepeatingRandom(20)
>>> n.get_all()
[12, 14, 13, 2, 20, 4, 15, 16, 19, 1, 8, 6, 7, 9, 5, 11, 10, 3, 18, 17]
If it is enough for you that a casual observer can't guess the next value, you can use things like a linear congruential generator or even a simple linear feedback shift register to generate the values and keep the state in the database in case you need more values. If you use these right, the values won't repeat until the end of the universe. You'll find more ideas in the list of random number generators.
If you think there might be someone with a serious interest in guessing the next values, you can use a database sequence to count the values you generate and encrypt them with an encryption algorithm or another cryptographically strong hash function. However, you need to take care that the encryption algorithm isn't easily breakable if someone can get hold of a sequence of successive numbers you generated; a simple RSA, for instance, won't do it because of the Franklin-Reiter Related Message Attack.
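As a concrete (and hedged) example of the LCG route: with parameters satisfying the Hull-Dobell conditions, the generator visits every value in [0, 2^31) exactly once per cycle. The specific constants below and the idea of persisting the state in a database column are my own assumptions.
M = 2**31
A = 1103515245        # A % 4 == 1 and C odd, so the period is the full 2**31
C = 12345

def next_serial(state):
    return (A * state + C) % M

# usage: read the last state from the database, advance it, write it back
state = 42
for _ in range(5):
    state = next_serial(state)
    print(state)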
Bit late answer, but I haven't seen this suggested anywhere.
Why not use the uuid module to create globally unique identifiers?
To generate a list of non-repeating random numbers within a defined range, you can do the following:
import random as rnd

plist = list()
length_of_list = 100
upbound = 1000
lowbound = 0
while len(plist) < length_of_list:
    plist.append(rnd.randint(lowbound, upbound))
    plist = list(set(plist))
I bumped into the same problem and opened a question with a different title before getting to this one. My solution is a random sample generator of indexes (i.e. non-repeating numbers) in the interval [0,maximal), called itersample. Here are some usage examples:
import random
generator=itersample(maximal)
another_number=generator.next() # pick the next non-repeating random number
or
import random
generator=itersample(maximal)
for random_number in generator:
    # do something with random_number
    if some_condition: # exit loop when needed
        break
itersample generates non-repeating random integers, storage need is limited to picked numbers, and the time needed to pick n numbers should be (as some tests confirm) O(n log(n)), regardless of maximal.
Here is the code of itersample:
import random

def itersample(c): # c = upper bound of generated integers
    sampled = []
    def fsb(a, b): # free spaces before middle of interval a,b
        fsb.idx = a + (b + 1 - a) / 2
        fsb.last = sampled[fsb.idx] - fsb.idx if len(sampled) > 0 else 0
        return fsb.last
    while len(sampled) < c:
        sample_index = random.randrange(c - len(sampled))
        a, b = 0, len(sampled) - 1
        if fsb(a, a) > sample_index:
            yielding = sample_index
            sampled.insert(0, yielding)
            yield yielding
        elif fsb(b, b) < sample_index + 1:
            yielding = len(sampled) + sample_index
            sampled.insert(len(sampled), yielding)
            yield yielding
        else: # sample_index falls inside sampled list
            while a + 1 < b:
                if fsb(a, b) < sample_index + 1:
                    a = fsb.idx
                else:
                    b = fsb.idx
            yielding = a + 1 + sample_index
            sampled.insert(a + 1, yielding)
            yield yielding
You are stating that you store the numbers in a database.
Wouldn't it then be easier to store all the numbers there, and ask the database for a random unused number?
Most databases support such a request.
Examples
MySQL:
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
PostgreSQL:
SELECT column FROM table
ORDER BY RANDOM()
LIMIT 1

Fastest way to search 1GB+ a string of data for the first occurrence of a pattern in Python

There's a 1 Gigabyte string of arbitrary data which you can assume to be equivalent to something like:
1_gb_string=os.urandom(1*gigabyte)
We will be searching this string, 1_gb_string, for an infinite number of fixed width, 1 kilobyte patterns, 1_kb_pattern. Every time we search the pattern will be different. So caching opportunities are not apparent. The same 1 gigabyte string will be searched over and over. Here is a simple generator to describe what's happening:
def findit(1_gb_string):
    1_kb_pattern = get_next_pattern()
    yield 1_gb_string.find(1_kb_pattern)
Note that only the first occurrence of the pattern needs to be found. After that, no other major processing should be done.
What can I use that's faster than Python's built-in find for matching 1KB patterns against 1GB or greater data strings?
(I am already aware of how to split up the string and searching it in parallel, so you can disregard that basic optimization.)
Update: Please bound memory requirements to 16GB.
As you clarify that long-ish preprocessing is acceptable, I'd suggest a variant of Rabin-Karp: "an algorithm of choice for multiple pattern search", as wikipedia puts it.
Define a "rolling hash" function, i.e., one such that, when you know the hash for haystack[x:x+N], computing the hash for haystack[x+1:x+N+1] is O(1). (Normal hashing functions such as Python's built-in hash do not have this property, which is why you have to write your own, otherwise the preprocessing becomes exhaustingly long rather than merely long-ish;-). A polynomial approach is fruitful, and you could use, say, 30-bit hash results (by masking if needed, i.e., you can do the computation w/more precision and just store the masked 30 bits of choice). Let's call this rolling hash function RH for clarity.
So, compute 1G of RH results as you roll along the haystack 1GB string; if you just stored these it would give you an array H of 1G 30-bit values (4GB) mapping index-in-haystack->RH value. But you want the reverse mapping, so use instead an array A of 2**30 entries (1G entries) that for each RH value gives you all the indices of interest in the haystack (indices at which that RH value occurs): each entry of A stores the index of the first possibly-interesting haystack index into another array B of 1G haystack indices, which is ordered to keep all indices into the haystack with identical RH values ("collisions" in hashing terms) adjacent. H, A and B all have 1G entries of 4 bytes each, so 12GB total.
Now for each incoming 1K needle, compute its RH, call it k, and use it as an index into A; A[k] gives you the first index b into B at which it's worth comparing. So, do:
ib = A[k]
b = B[ib]
while b < len(haystack) - 1024:
    if H[b] != k: return "not found"
    if needle == haystack[b:b+1024]: return "found at", b
    ib += 1
    b = B[ib]
with a good RH you should have few collisions, so the while should execute very few times until returning one way or another. So each needle-search should be really really fast.
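For concreteness, here is a minimal sketch of such a rolling hash; the base, modulus, and 30-bit choice are arbitrary, and this is just the polynomial idea, not a drop-in Rabin-Karp implementation.
BASE = 257
MOD = (1 << 30) - 35               # some large modulus that fits in 30 bits (choice is arbitrary)
WINDOW = 1024
POW = pow(BASE, WINDOW - 1, MOD)   # precomputed once, used to remove the outgoing byte

def initial_hash(data):
    h = 0
    for byte in data[:WINDOW]:     # data is a bytes object, so iteration yields ints
        h = (h * BASE + byte) % MOD
    return h

def roll(h, out_byte, in_byte):
    h = (h - out_byte * POW) % MOD      # drop the byte leaving the window...
    return (h * BASE + in_byte) % MOD   # ...then shift and add the byte entering it

# rolling along a haystack of bytes:
# h = initial_hash(haystack)
# for x in range(1, len(haystack) - WINDOW + 1):
#     h = roll(h, haystack[x - 1], haystack[x - 1 + WINDOW])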
There are a number of string matching algorithms use in the field of genetics to find substrings. You might try this paper or this paper
As far as I know, the standard find algorithm is a naive algorithm with complexity of about n*m comparisons, because it checks the pattern against every possible offset. There are some more effective algorithms, requiring about n+m comparisons. If your string is not a natural-language string, you can try the Knuth-Morris-Pratt algorithm. The Boyer-Moore search algorithm is fast and simple enough too.
Are you willing to spend a significant time preprocessing the string?
If you are, what you can do is build a list of n-grams with offsets.
Suppose your alphabet is hex bytes and you are using 1-grams.
Then for 00-ff, you can create a dictionary that looks like this (perlese, sorry):
$offset_list{00} = #array_of_offsets
$offset_list{01} = #...etc
where you walk down the string and build the #array_of_offsets from all points where bytes happen. You can do this for arbitrary n-grams.
This provides a "start point for search" that you can use to walk.
Of course, the downside is that you have to preprocess the string, so that's your tradeoff.
edit:
The basic idea here is to match prefixes. This may bomb out badly if the information is super-similar, but if it has a fair amount of divergence between n-grams, you should be able to match prefixes pretty well.
Let's quantify divergence, since you've not discussed the kind of info you're analyzing. For the purposes of this algorithm, we can characterize divergence as a distance function: you need a decently high Hamming distance. If the hamming distance between n-grams is, say, 1, the above idea won't work. But if it's n-1, the above algorithm will be much easier.
To improve on my algorithm, let's build an algorithm that does some successive elimination of possibilities:
We can invoke Shannon Entropy to define information of a given n-gram. Take your search string and successively build a prefix based upon the first m characters. When the entropy of the m-prefix is 'sufficiently high', use it later.
1) Define p to be an m-prefix of the search string.
2) Search your 1 GB string and create an array of offsets that match p.
3) Extend the m-prefix to some k-prefix, k > m, with the entropy of the k-prefix higher than that of the m-prefix.
4) Keep the elements of the offset array defined above that match the k-prefix string. Discard the non-matching elements.
5) Repeat steps 3 and 4 until the entire search string has been consumed.
In a sense, this is like reversing Huffman encoding.
With infinite memory, you can hash every 1k string along with its position in the 1 GB file.
With less than infinite memory, you will be bounded by how many memory pages you touch when searching.
I don't know definitively if the find() method for strings is faster than the search() method provided by Python's re (regular expressions) module, but there's only one way to find out.
If you're just searching a string, what you want is this:
import re
def findit(1_gb_string):
    yield re.search(1_kb_pattern, 1_gb_string)
However, if you really only want the first match, you might be better off with finditer(), which returns an iterator; with operations this large it might actually be faster.
http://www.youtube.com/watch?v=V5hZoJ6uK-s
It will be of most value to you. It's an MIT lecture on dynamic programming.
If the patterns are fairly random, you can precompute the location of n-prefixes of strings.
Instead of going over all options for n-prefixes, just use the actual ones in the 1GB string; there will be less than 1 Gig of those. Use as big a prefix as fits in your memory. I don't have 16GB of RAM to check, but a prefix of 4 could work (at least with a memory-efficient data structure); if not, try 3 or even 2.
For a random 1GB string and random 1KB patterns, you should get a few tens of locations per prefix if you use 3-byte prefixes, but 4-byte prefixes should get you an average of 0 or 1, so lookup should be fast.
Precompute Locations
def find_all(pattern, string):
    cur_loc = 0
    while True:
        next_loc = string.find(pattern, cur_loc)
        if next_loc < 0: return
        yield next_loc
        cur_loc = next_loc + 1

big_string = ...
CHUNK_SIZE = 1024
PREFIX_SIZE = 4
precomputed_indices = {}
for i in xrange(len(big_string) - CHUNK_SIZE):
    prefix = big_string[i:i+PREFIX_SIZE]
    if prefix not in precomputed_indices:
        precomputed_indices[prefix] = tuple(find_all(prefix, big_string))
Look up a pattern
def find_pattern(pattern):
    prefix = pattern[:PREFIX_SIZE]
    # optimization - big prefixes will result in many misses
    if prefix not in precomputed_indices:
        return -1
    for loc in precomputed_indices[prefix]:
        if big_string[loc:loc+CHUNK_SIZE] == pattern:
            return loc
    return -1
Someone hinted at a possible way to index this thing if you have abundant RAM (or possibly even disk/swap) available.
Imagine if you performed a simple 32-bit CRC on a 1K block extending from each character in the original Gig string. This would result in 4 bytes of checksum data for each byte offset from the beginning of the data.
By itself this might give a modest improvement in search speed. The checksum of each 1K search target could be checked against each CRC, with each collision tested for a true match. That should still be a couple of orders of magnitude faster than a normal linear search.
That, obviously, costs us 4GB of RAM for the CRC array (plus the original Gig for the original data and a little more overhead for the environment and our program).
If we have ~16GB we could sort the checksums and store a list of offsets where each is found. That becomes an indexed search (an average of about 16 probes per target search, worst case around 32 or 33; might be a fence post there).
It's possible that a 16GB file index would still give better performance than a linear checksum search, and it would almost certainly be better than a linear raw search (unless you have extremely slow filesystems/storage).
(Adding): I should clarify that this strategy is only beneficial given that you've described a need to do many searches on the same one gigabyte data blob.
You might use a threaded approach to building the index (while reading it as well as having multiple threads performing the checksumming). You might also offload the indexing into separate processes or a cluster of nodes (particularly if you use a file-based index --- the ~16GB option described above). With a simple 32-bit CRC you might be able to perform the checksums/indexing as fast as your reader thread can get the data (but we are talking about 1024 checksums for each 1K of data so perhaps not).
You might further improve performance by coding a Python module in C for actually performing the search ... and/or possibly for performing the checksumming/indexing.
The development and testing of such C extensions entail other trade-offs, obviously enough. It sounds like this would have near zero re-usability.
One efficient but complex way is full-text indexing with the Burrows-Wheeler transform. It involves performing a BWT on your source text, then using a small index on that to quickly find any substring in the text that matches your input pattern.
The time complexity of this algorithm is roughly O(n) with the length of the string you're matching - and independent of the length of the input string! Further, the size of the index is not much larger than the input data, and with compression can even be reduced below the size of the source text.
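This is not the BWT/FM-index itself, but here is a toy sketch of the same index-once-then-query-fast idea using a naively built suffix array; it is fine for small inputs, while at the 1 GB scale you would want a real FM-index or a linear-time suffix-array construction.
def build_suffix_array(text):
    # O(n^2 log n) toy construction; real implementations do much better
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_match(text, sa, pattern):
    # binary search for the first suffix that is >= pattern
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern:
        return sa[lo]   # one match; take the min over the matching block for the first occurrence
    return -1

text = "abracadabra"
sa = build_suffix_array(text)
print(find_match(text, sa, "bra"))   # prints 8 here (a valid match position); -1 if absent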
