I've got a file containing several channels of data. The file is sampled at a base rate, and each channel is sampled at that base rate divided by some number -- it seems to always be a power of 2, though I don't think that's important.
So, if I have channels a, b, and c, sampled at dividers of 1, 2, and 4, my stream will look like:
a0 b0 c0 a1 a2 b1 a3 a4 b2 c1 a5 ...
For added fun, the channels can independently be floats or ints (though I know for each one), and the data stream does not necessarily end on a power of 2: the example stream would be valid without further extension. The values are sometimes big and sometimes little-endian, though I know what I'm dealing with up-front.
I've got code that properly unpacks these and fills numpy arrays with the correct values, but it's slow: it looks something like (hope I'm not glossing over too much; just giving an idea of the algorithm):
for sample_num in range(total_samples):
    channels_to_sample = [ch for ch in all_channels if ch.samples_for(sample_num)]
    format_str = ... # build format string from channels_to_sample
    # read and unpack the data for this sample
    data = struct.unpack(format_str, my_file.read(struct.calcsize(format_str)))
    # iterate over the data tuple and put values in channels_to_sample
    for val, ch in zip(data, channels_to_sample):
        ch.data[sample_num // ch.divider] = val
And it's slow -- a few seconds to read a 20MB file on my laptop. Profiler tells me I'm spending a bunch of time in Channel#samples_for() -- which makes sense; there's a bit of conditional logic there.
My brain feels like there's a way to do this in one fell swoop instead of nesting loops -- maybe using indexing tricks to read the bytes I want into each array? The idea of building one massive, insane format string also seems like a questionable road to go down.
Update
Thanks to those who responded. For what it's worth, the numpy indexing trick reduced the time required to read my test data from about 10 seconds to about 0.2 seconds, a speedup of 50x.
The best way to really improve the performance is to get rid of the Python loop over all samples and let NumPy do this loop in compiled C code. This is a bit tricky to achieve, but it is possible.
First, you need a bit of preparation. As pointed out by Justin Peel, the pattern in which the samples are arranged repeats after some number of steps. If d_1, ..., d_k are the divisors for your k data streams and b_1, ..., b_k are the sample sizes of the streams in bytes, and lcm is the least common multiple of these divisors, then
N = lcm * (b_1/d_1 + ... + b_k/d_k)
will be the number of bytes which the pattern of streams will repeat after. If you have figured out which stream each of the first N bytes belongs to, you can simply repeat this pattern.
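For the question's example (dividers 1, 2 and 4), and assuming hypothetical 4-byte samples for all three channels, the computation would look like this:

from math import gcd
from functools import reduce

def lcm_all(nums):
    # least common multiple of a list of divisors
    return reduce(lambda x, y: x * y // gcd(x, y), nums)

d = [1, 2, 4]  # dividers of channels a, b, c
b = [4, 4, 4]  # sample sizes in bytes (assumed, for illustration)
lcm = lcm_all(d)
N = lcm * sum(bi // di for bi, di in zip(b, d))
print(lcm, N)  # 4, 28: the byte pattern repeats every 28 bytes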
You can now build the array of stream indices for the first N bytes by something similar to
stream_index = []
for sample_num in range(lcm):
    stream_index += [i for i, ch in enumerate(all_channels)
                     if ch.samples_for(sample_num)]
repeat_count = [b[i] for i in stream_index]
stream_index = numpy.array(stream_index).repeat(repeat_count)
Here, b is the sequence of sample sizes b_1, ..., b_k, so each stream index is repeated once for every byte of that stream's sample.
Now you can do
data = numpy.fromfile(my_file, dtype=numpy.uint8).reshape(-1, N)
streams = [data[:,stream_index == i].ravel() for i in range(k)]
You possibly need to pad the data a bit at the end to make the reshape() work.
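The padding could look like this (a sketch; it appends zero bytes so the total length becomes a multiple of N):

raw = numpy.fromfile(my_file, dtype=numpy.uint8)
pad = -len(raw) % N  # bytes needed to reach the next multiple of N
if pad:
    raw = numpy.concatenate([raw, numpy.zeros(pad, dtype=numpy.uint8)])
data = raw.reshape(-1, N)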
Now you have all the bytes belonging to each stream in separate NumPy arrays. You can reinterpret the data by simply assigning to the dtype attribute of each stream. If you want the first stream to be interpreted as big-endian integers, simply write
streams[0].dtype = ">i"
This won't change the data in the array in any way, just the way it is interpreted.
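An equivalent spelling uses view(), which reinterprets the same buffer without copying; ">i4" here is an assumed element width - use whatever matches the stream:

streams[0] = streams[0].view(">i4")  # same bytes, now read as big-endian int32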
This may look a bit cryptic, but should be much better performance-wise.
Replace channel.samples_for(sample_num) with an iter_channels(channels_config) iterator that keeps some internal state and lets you read the file in one pass. Use it like this:
for (chan, sample_data) in izip(iter_channels(channels_config), data):
    decoded_data = chan.decode(sample_data)
To implement the iterator, think of a base clock with a period of one. The periods of the various channels are integers. Iterate the channels in order, and emit a channel if the clock modulo its period is zero.
import itertools

def iter_channels(channels_config):
    # Emit a channel whenever the base clock is a multiple of its period.
    for i in itertools.count():
        for chan in channels_config:
            if i % chan.period == 0:
                yield chan
The grouper() recipe along with itertools.izip() should be of some help here.
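A sketch of how the pieces could fit together - sample_size, decode and store are hypothetical channel attributes standing in for whatever the real channel objects provide:

from itertools import islice

def read_file_in_one_pass(f, channels_config, total_emits):
    # Walk the emission order once, reading each channel's bytes as we go.
    for chan in islice(iter_channels(channels_config), total_emits):
        raw = f.read(chan.sample_size)  # hypothetical per-channel byte width
        chan.store(chan.decode(raw))    # hypothetical decode/store hooks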
Related
I am working with strings that are recursively concatenated to lengths of around 80 million characters. Python slows down dramatically as the string length increases.
Consider the following loop:
s = ''
for n in range(0, r):
    s += 't'
I measure the time it takes to run to be 86 ms for r = 800,000, 3.11 seconds for r = 8,000,000, and 222 seconds for r = 80,000,000.
I am guessing this has something to do with how python allocates additional memory for the string. Is there a way to speed this up, such as allocating the full 80MB to the string s when it is declared?
When you have a string value and change it in your program, Python keeps the previous value in one part of memory and places the changed string in a newly allocated part of RAM.
As a result, the old values remain in RAM, unused.
The garbage collector eventually cleans these old, unused values out of RAM, but that takes time.
You can observe this yourself. The gc module shows how many objects sit in each of the collector's generations. See this:
import gc
print(gc.get_count())
result:
(596, 2, 1)
In this example, we have 596 objects in our youngest generation, two objects in the next generation, and one object in the oldest generation.
All of this allocation and collection work is part of why your program slows down.
See Efficient String Concatenation in Python for a comparison of the faster alternatives.
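For reference, the usual idiom from that comparison is to collect the pieces in a list and join once at the end - a minimal sketch:

parts = []
for n in range(r):
    parts.append('t')
s = ''.join(parts)  # one allocation for the final string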
Good luck.
It can't be done in a straightforward way with text (string) objects, but it is trivial if you are dealing with bytes - in that case, you can create a bytearray object larger than your final outcome and insert your values into it.
If you need the final object as text, you can then decode it to text, the single step will be fast enough.
As you don't state the nature of your data, it may become harder - if no single-byte encoding can cover all the characters you need, you have to resort to a variable-length encoding, such as utf-8, or a multibyte encoding, such as utf-16 or utf-32. In both cases, it is no problem as long as you keep proper track of your insertion index - which will also be your final data size for re-encoding. (If all you are using are genetic "GATACA" strings, just use ASCII encoding, and you are done.)
data = bytearray(100_000_000)  # preallocate 100 million positions
index = 0
for character in input_data:
    v = character.encode("utf-8")
    s = len(v)
    if s == 1:
        data[index] = v[0]  # single byte: assign its integer value
    else:
        data[index:index + s] = v  # multibyte character: splice the bytes in
    index += s
data_as_text = data[:index].decode("utf-8")
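Applied to the loop from the question, the same idea is even simpler, since 't' is a single ASCII byte - a sketch:

buf = bytearray()
for n in range(80_000_000):
    buf += b't'  # bytearray is mutable, so this append is amortized O(1)
s = buf.decode('ascii')  # one decode at the end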
I have a large NumPy array nodes = np.arange(100_000_000) and I need to rearrange this array by:
Record and then remove the middle value in the array
Split the array into the left half and right half
Repeat Steps 1-2 for each half
Stop when all values are exhausted
So, for a smaller input example nodes = np.arange(10), the output would be:
[5 2 8 1 4 7 9 0 3 6]
This was accomplished by naively doing:
import numpy as np

def split(node, out):
    mid = len(node) // 2
    out.append(node[mid])
    return node[:mid], node[mid+1:]

def reorder(a):
    nodes = [a.tolist()]
    out = []
    while nodes:
        tmp = []
        for node in nodes:
            for n in split(node, out):
                if n:
                    tmp.append(n)
        nodes = tmp
    return np.array(out)

if __name__ == "__main__":
    nodes = np.arange(10)
    print(reorder(nodes))
However, this is way too slow for nodes = np.arange(100_000_000) and so I am looking for a much faster solution.
You can vectorize your function with Numpy by working on groups of slices.
Here is an implementation:
import numpy as np

# Similar to [e for tmp in zip(a, b) for e in tmp],
# but on NumPy arrays and much faster
def interleave(a, b):
    assert len(a) == len(b)
    return np.column_stack((a, b)).reshape(len(a) * 2)

# n is the length of the input range (len(a) in your example)
def fast_reorder(n):
    if n == 0:
        return np.empty(0, dtype=np.int32)
    startSlices = np.array([0], dtype=np.int32)
    endSlices = np.array([n], dtype=np.int32)
    allMidSlices = np.empty(n, dtype=np.int32)  # similar to "out" in your implementation
    midInsertCount = 0  # actual size of allMidSlices
    # Generate middle values as long as there are valid slices to split
    while midInsertCount < n:
        # Generate the new mid slices
        midSlices = (endSlices + startSlices) // 2
        # Computing the next slices is not needed for the last step
        if midInsertCount + len(midSlices) < n:
            # Generate the next slices (possibly including invalid ones)
            newStartSlices = interleave(startSlices, midSlices + 1)
            newEndSlices = interleave(midSlices, endSlices)
            # Discard invalid slices
            isValidSlices = newStartSlices < newEndSlices
            startSlices = newStartSlices[isValidSlices]
            endSlices = newEndSlices[isValidSlices]
        # Fast appending
        allMidSlices[midInsertCount:midInsertCount + len(midSlices)] = midSlices
        midInsertCount += len(midSlices)
    return allMidSlices[0:midInsertCount]
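A quick check against the example from the question:

print(fast_reorder(10))  # [5 2 8 1 4 7 9 0 3 6], matching reorder(np.arange(10))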
On my machine, this is 89 times faster than your scalar implementation with the input np.arange(100_000_000), dropping from 2min35s to 1.75s. It also consumes far less memory (roughly 3-4 times less). Note that if you want even faster code, you probably need to use a native language like C or C++.
Edit:
The question has been updated with a much smaller input array, so I leave the below for historical reasons. Basically, it was likely a typo, but we often get accustomed to computers working with insanely large numbers, and when memory is involved they become a real problem.
There is already a numpy based solution submitted by someone else that I think fits the bill.
Your code requires an insane amount of RAM just to hold 100 billion 64-bit integers. Do you have 800 GB of RAM? You then convert the numpy array to a list, which will be substantially larger than the array (each packed 64-bit int in the numpy array becomes a much less memory-efficient Python int object, and the list holds a pointer to each one). The many slices of that list don't duplicate the data, but they do duplicate the pointers to it and use even more RAM. You also append all the result values to a list one value at a time; lists are generally fast for appending, but at this extreme size the allocation strategy becomes wasteful (lists grow by over-allocating when they fill up, so you end up reserving more RAM than you need and doing many allocations and likely copies). What kind of machine are you running this on? There are ways to improve your code, but unless you're running it on a supercomputer I don't know that you'd ever finish that calculation. I only have 32 GB of RAM, and I'm not going to even try to create a 100B int64 numpy array, as I don't want to use up SSD write life on a mass of virtual memory.
As for improving your code: stick to numpy arrays; don't convert to a Python list, which greatly increases the RAM you need. Preallocate a numpy array to put the answer in. Then you need a new algorithm: anything recursive, or recursion-like (i.e. a loop splitting the input), requires tracking a lot of state, and your nodes list would become extraordinarily large and again use a lot of RAM. You could use len(a) as a sentinel for values removed from your list and scan through the entire array each time to figure out what to do next, but that saves RAM in favour of a tremendous amount of searching through a gigantic array. I feel like there is an algorithm that cuts numbers from each end, places them in the output, and just tracks the beginning and end, but I haven't figured it out, at least not yet.
I also think there is a simpler algorithm where you just track the number of splits you've done instead of keeping a giant list of slices in memory. Take the middle of the left half, then the middle of the right, then count up one; when you take the middle of the left half's left half, you know you have to jump to the right half, then the count is one so you jump over to the original right half's left half, and so on. Based on the depth into the halves and the length of the input, you should be able to jump around without scanning or tracking all of those slices, though I haven't been able to dedicate much time to thinking this through.
With a problem of this nature if you really need to push the limits you should consider using C/C++ so you can be as efficient as possible with RAM usage and because you're doing an insane number of tiny things which doesn't map well to python performance.
I have several int16 streams in strings and I want to sum them together (without overflow) and return the result as an int16 string. The background is mixing several wave files into one stream.
decodeddata1 = numpy.fromstring(data, numpy.int16)
decodeddata2 = numpy.fromstring(data2, numpy.int16)
newdata = decodeddata1 + decodeddata2
return newdata.tostring()
Is there a way of doing this with numpy, or is there another library?
Processing each single value in Python is too slow and results in stutter.
The most important thing is performance, since this code is used in a callback method feeding the audio.
#edit:
test input data:
a = np.int16([20000,20000,-20000,-20000])
b = np.int16([10000,20000,-10000,-20000])
print a + b --> [ 30000 -25536 -30000 25536]
but I want to keep the maximum levels:
[ 30000 40000 -30000 -40000]
The obvious consequence of mixing two signals, each with a dynamic range of -32768 <= x <= 32767, is a resulting signal with a range of -65536 <= x <= 65534 - which requires 17 bits to represent.
To avoid clipping, you will need to gain-scale the inputs - the obvious way is to divide the sum (or both of the inputs) by 2.
numpy looks as though it should be quite fast for this - at least faster than Python's built-in variable-size integer type. If the additional arithmetic is a performance concern, you should reconsider your choice of language.
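To make that concrete with the test data from the question - a minimal sketch that upcasts before adding so the sum cannot wrap, then halves to fit back into int16:

import numpy as np

a = np.int16([20000, 20000, -20000, -20000])
b = np.int16([10000, 20000, -10000, -20000])

mixed = a.astype(np.int32) + b  # int32 arithmetic: no wraparound
print(mixed)                    # [ 30000  40000 -30000 -40000]
out = (mixed // 2).astype(np.int16).tobytes()  # gain-scale, back to int16 bytes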
In my application I receive from an iterator an arbitrary amount (let's say 1000 for now) of big 1-dimensional arrays arr1, arr2, arr3, ..., arr1000 (10000 entries each). Each entry is an integer between 0 and n, where in this case n = 9. My ultimate goal is to compute a 1-dimensional array result such that result[i] == the mode of arr1[i], arr2[i], arr3[i], ..., arr1000[i].
However, it is not tractable to concatenate the arrays to one big matrix and then compute the mode row-wise, since this may exceed the RAM on my machine.
An alternative would be to set up an array res2 of shape (10000, 10), then loop through every array, using each entry e as an index to increase the value of res2[i][e] by 1. After looping, I would apply something like argmax. However, this is too slow.
So: is there a way to perform the task in a fast way, maybe by using NumPy's advanced indexing?
EDIT (due to the comments):
This is basically the code which calculates the modes row-wise, without concatenating the arrays:
import numpy as np

def foo(length, n):
    counts = np.zeros((length, n), dtype=np.int_)
    for arr in array_iterator():
        i = 0
        for e in arr:
            counts[i][e] += 1
            i += 1
    return np.argmax(counts, axis=1)
It already takes 60 seconds for 100 arrays of size 10000 (although there is more work done behind the scenes, which contributes to that time - however, this work scales linearly with the number of arrays).
Regarding the real sizes:
The number of different arrays is really arbitrary. It's a parameter of the experiments, and I'd like to be able to set it even to values like 10^6. The length of each array depends on the data set I'm working with. This could be 10000, or 100000, or even worse. However - splitting this into smaller pieces may be possible, though annoying.
My free RAM for this task is about 4 GB.
EDIT 2:
The running time I gave above is misleading. Actually, the running time of just the inner loop (for e in arr) in the scenario mentioned above is only 5 seconds - which is now OK for me, since it's negligible compared to the remaining running time. I will leave this question open for the moment anyway, since an even faster method might be out there.
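For reference, the inner Python loop in the EDIT above can be replaced by a single vectorized indexed update - a sketch assuming array_iterator() yields integer arrays of the given length:

import numpy as np

def foo_vectorized(length, n):
    counts = np.zeros((length, n), dtype=np.int_)
    rows = np.arange(length)
    for arr in array_iterator():
        counts[rows, arr] += 1  # one fancy-indexed update per array
    return np.argmax(counts, axis=1)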
I am new to Python. I am running the following code and it gives a memory error with Python 2.7.
Since I am using OpenCV, I am working with Python 2.7. I have read the previous posts, but I am not understanding much from them.
s = {}
ns = {}
ts = {}
for i in range(0, 256):          # for red component
    for j in range(0, 256):      # for green component
        for k in range(0, 256):  # for blue component
            s[(i, j, k)] = 0
            ns[(i, j, k)] = 0
            ts[(i, j, k)] = i*j*k
Please help. The code tries to store the frequency of red, green and blue components, and for that I am initializing these values to zero.
Thing 1: use itertools instead of constructing all the range lists each time around the loop. xrange returns a lazy iterable instead of building a full list, and product returns an iterator yielding tuples of elements from the given iterables.
Thing 2: use numpy for large data. It's an n-dimensional array implementation designed for exactly this sort of thing.
>>> import numpy as np
>>> from itertools import product
>>> x=np.zeros((256,256,256))
>>> for i, j, k in product(xrange(256), repeat=3):
... x[i,j,k]= i*j*k
...
Takes about five seconds for me, and the expected amount of memory.
$ cat /proc/27240/status
Name: python
State: S (sleeping)
...
VmPeak: 420808 kB
VmSize: 289732 kB
Note that you may actually run into system-wide memory limits if you try to allocate three 256*256*256 arrays, since each one has about 17 million entries. Fortunately numpy lets you persist arrays to disk.
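For instance, np.save writes an array to a .npy file, and np.load can map it back from disk instead of holding it all in RAM - a sketch:

>>> np.save('counts.npy', x)                  # persist to disk
>>> x = np.load('counts.npy', mmap_mode='r')  # disk-backed, read-only view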
Have you come across the PIL (Python Imaging Library)? You may find it helpful.
As a matter of fact, your program needs at least(!) 256*256*256*4*3 bytes solely for the value data of the dicts. Besides, your key tuples occupy 256*256*256*3*3*4 bytes.
This is in total 805306368 bytes, or 0.75 GiB of data.
This calculation doesn't even include the overhead of maintaining the data in the dicts.
So whether it fails or not depends on the amount of memory your machine has.
You could do a first step and do
s = {}
ns = {}
ts = {}
for i in range(0, 256):
    for j in range(0, 256):
        for k in range(0, 256):
            index = (i, j, k)
            s[index] = j
            ns[index] = k
            ts[index] = i*j*k
which (in theory) will only occupy about half the memory as before - again, counting only the data - because the index tuples are created once and reused across all three dicts.
From what you describe (you want a mere counting), you don't need the full range of combinations to be pre-initialized. So you can omit your initialization shown in the question and instead build a storage where you only store these values where you actually have data, which are supposedly much fewer than possible.
You could either use a defaultdict() or imitate its behaviour manually, as I think that most of the combinations are not used in your color "palette".
from collections import defaultdict

s = defaultdict(int)   # int() returns 0, so missing keys start at 0
ns = defaultdict(int)
# what is ts? do you need it?
Now you have three dict-like objects which can be written to if needed. Then, for every combination of colors which you really have, you can do s[index] += 1 resp. ns[index] += 1.
I don't know about your ts - maybe you can either calculate it on the fly, or you'll have to find a different solution.
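The counting itself could then look like this (pixels is a stand-in for whatever iterable of (r, g, b) triples your image library gives you):

for r, g, b in pixels:
    s[(r, g, b)] += 1  # missing keys start at 0 automatically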
Even if all your variables used a single byte, that program would need roughly 200 MB of RAM.
You should use compression to store more in limited space.
Edit: If you want to make a histogram in Python, see this nice example of using the Python Imaging Library (PIL). The hard work is done with these 3 lines:
import Image
img = Image.open(imagepath)
hist = img.histogram()