Can I vectorize this Python code?

I'm kind of new to Python and I have to implement an "as fast as possible" version of this code.

import struct
import numpy as np

s = "<%dH" % (int(width * height),)
z = struct.unpack(s, contents)
heights = np.zeros((height, width))
for r in range(height):
    for c in range(width):
        elevation = z[width * r + c]
        if elevation == 65535 or elevation < 0 or elevation > 20000:
            elevation = 0.0
        heights[r][c] = float(elevation)
I've read some of the Python vectorization questions... but I don't think they apply to my case. Most of them are about things like using np.sum instead of for loops. I guess I have two questions:
1. Is it possible to speed up this code? I think heights[r][c] = float(elevation) is where the bottleneck is. I need to find some Python timing commands to confirm this.
2. If it is possible to speed up this code, what are my options? I have seen some people recommend Cython, PyPy, and weave. I could do this faster in C, but this code also needs to generate plots, so I'd like to stick with Python so I can use matplotlib.

As you mention, the key to writing fast code with numpy is vectorization: pushing the work off to fast C-level routines instead of Python loops. The usual approach seems to improve things by a factor of ten or so relative to your original code:
def faster(elevation, height, width):
    heights = np.array(elevation, dtype=float)
    heights = heights.reshape((height, width))
    # the 65535 sentinel is already covered by the > 20000 test
    heights[(heights < 0) | (heights > 20000)] = 0
    return heights
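(For reference, orig in the timings below is presumably the question's double loop wrapped in a function, along these lines:)

def orig(elevation, height, width):
    heights = np.zeros((height, width))
    for r in range(height):
        for c in range(width):
            e = elevation[width * r + c]
            if e == 65535 or e < 0 or e > 20000:
                e = 0.0
            heights[r][c] = float(e)
    return heights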
>>> h,w = 100, 101; z = list(range(h*w))
>>> %timeit orig(z,h,w)
100 loops, best of 3: 9.71 ms per loop
>>> %timeit faster(z,h,w)
1000 loops, best of 3: 641 µs per loop
>>> np.allclose(orig(z,h,w), faster(z,h,w))
True
That ratio seems to hold even for longer z:
>>> h,w = 1000, 10001; z = list(range(h*w))
>>> %timeit orig(z,h,w)
1 loops, best of 3: 9.44 s per loop
>>> %timeit faster(z,h,w)
1 loops, best of 3: 675 ms per loop
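If contents is the raw bytes read from the file, you may be able to skip struct.unpack entirely and let numpy parse the buffer itself: the struct format "<%dH" means little-endian unsigned 16-bit, which corresponds to numpy dtype '<u2'. A sketch under that assumption (faster_still is just an illustrative name):

import numpy as np

def faster_still(contents, height, width):
    # '<u2' matches the struct format '<H': little-endian unsigned 16-bit
    heights = np.frombuffer(contents, dtype='<u2').astype(float)
    heights = heights.reshape((height, width))
    heights[heights > 20000] = 0  # uint16 is never negative, and 65535 > 20000
    return heights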

Related

slice assignment slower using memoryview (Python 3.5.0)

I have a large bytearray x and want to assign a slice of it to a slice of another bytearray y:
x = bytearray(10**7) #something else in practice
y = bytearray(6*10**6)
y[::6] = x[:2*10**6:2]
I figured using memoryview would be faster, and indeed
memoryview(x)[:2*10**6:2]
is very fast. However,
y[::6] = memoryview(x)[:2*10**6:2]
takes 5 times as long as y[::6] = x[:2*10**6:2].
Am I missing something, or is this slowdown a bug in Python?
What is the fastest way to do this in Python (a) if I want to repeatedly assign a known number of 0's, and (b) in general?
The slowdown is not so much a bug as the fact that memoryview and the buffer protocol are still relatively new to Python and poorly optimised. The underlying code for y[::6] = memoryview(x)[:2*10**6:2] creates a contiguous copy of the bytearray before copying it over, meaning it will be slower than directly creating and assigning a normal slice of the bytearray. Indeed, in this particular instance (on my machine), using a memoryview is closer in speed to y[::6] = islice(x, None, 2*10**6, 2) than to direct assignment.
numpy has existed for much longer and is much better optimised for the types of operations you are interested in doing.
Using ipython:
In [1]: import numpy as np; from itertools import islice
In [2]: x = bytearray(10**7)
In [3]: y = bytearray(6*10**6)
In [4]: x_np = np.array(x)
In [5]: y_np = np.array(y)
In [6]: %timeit y[::6] = memoryview(x)[:2*10**6:2]
100 loops, best of 3: 10.9 ms per loop
In [7]: %timeit y[::6] = x[:2*10**6:2]
1000 loops, best of 3: 1.65 ms per loop
In [8]: %timeit y[::6] = islice(x, None, 2*10**6, 2)
10 loops, best of 3: 22.9 ms per loop
In [9]: %timeit y_np[::6] = x_np[:2*10**6:2]
1000 loops, best of 3: 911 µs per loop
The last two have the added benefit of having very little memory overhead.
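If you need the result back inside the original bytearray rather than in a separate numpy array, np.frombuffer gives you a writable view over a bytearray without copying, so the strided copy runs in C but still mutates y in place. A sketch:

import numpy as np

x = bytearray(10**7)
y = bytearray(6 * 10**6)

x_np = np.frombuffer(x, dtype=np.uint8)  # views over the bytearrays, no copies
y_np = np.frombuffer(y, dtype=np.uint8)

y_np[::6] = x_np[:2*10**6:2]  # fast strided copy; y itself is modified
y_np[::6] = 0                 # case (a): broadcasting assigns the zeros directly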

Fast indexed dot-product for numpy/scipy

I'm using numpy to do linear algebra. I want to do fast subset-indexed dot products and other linear operations.
When dealing with big matrices, a slicing solution like A[:,subset].dot(x[subset]) may take longer than doing the multiplication on the full matrix.
A = np.random.randn(1000,10000)
x = np.random.randn(10000,1)
subset = np.sort(np.random.randint(0,10000,500))
Timings show that sub-indexing is faster only when the columns form one contiguous block:
%timeit A.dot(x)
100 loops, best of 3: 4.19 ms per loop
%timeit A[:,subset].dot(x[subset])
100 loops, best of 3: 7.36 ms per loop
%timeit A[:,:500].dot(x[:500])
1000 loops, best of 3: 1.75 ms per loop
Still, the acceleration is not what I would expect (20x faster!).
Does anyone know of a library/module that allows this kind of fast operation through numpy or scipy?
For now I'm using cython to code a fast column-indexed dot product through the cblas library. But for more complex operations (pseudo-inverse, or subindexed least-squares solving) I'm not sure I'll reach good acceleration.
Thanks!
Well, this is faster.
%timeit A.dot(x)
#4.67 ms
%%timeit
y = np.zeros_like(x)
y[subset] = x[subset]
d = A.dot(y)
#4.77ms
%timeit c = A[:,subset].dot(x[subset])
#7.21ms
And you have np.allclose(d, c) == True.
Note that how fast this is depends on the input. With subset = np.array([1,2,3]), the timing of my solution stays pretty much the same, while the sliced version drops to 46 µs.
Basically, this will be faster if the size of subset is not much smaller than the size of x.
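Wrapped up as a reusable helper (a minimal sketch; masked_dot is just an illustrative name), the trick is simply to zero out the entries of x outside subset and then do one full dot product:

import numpy as np

def masked_dot(A, x, subset):
    # Full-matrix product with the unwanted rows of x zeroed out;
    # equivalent to A[:, subset].dot(x[subset]) but avoids the column copy
    # that fancy indexing makes.
    y = np.zeros_like(x)
    y[subset] = x[subset]
    return A.dot(y)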

Speeding up all-to-all comparisons with a lookup table on Numpy and/or Pandas

I have two Pandas dataframes with some common information between them:
import pandas as pd
from numpy.random import rand

n_classes = 100
classes = range(n_classes)
activity_data = pd.DataFrame(columns=['Class', 'Activity'],
                             data=list(zip(classes, rand(n_classes))))
weight_lookuptable = pd.DataFrame(index=classes, columns=classes,
                                  data=rand(n_classes, n_classes))
# Important for comprehension: the classes are both the indices and the
# columns. Every class has a relationship with every other class.
I then want to perform this operation:
q =[sum(activity_data['Activity']*activity_data['Class'].map(weight_lookuptable[c])) for c in activity_data['Class']]
Description: for every class, look up that class' class-to-class weights in the lookup table and multiply them by the respective classes' activities. Then sum.
Is there a smarter way to do this so as to make it faster? It's pretty fast now, but I'll be doing this millions of times and could really use an order of magnitude or two of reduction.
Maybe there is something clever to be done with making activity_data['Class'] an index. But obviously the biggest opportunity for gains would be to eliminate the for c in activity_data['Class'] component. I just don't see how to do it.
IIUC, you could use dot, I think:
>>> q = [sum(activity_data['Activity']*activity_data['Class'].map(weight_lookuptable[c])) for c in activity_data['Class']]
>>> new_q = activity_data["Activity"].dot(weight_lookuptable)
>>> np.allclose(q, new_q)
True
which is much faster for me:
>>> %timeit q = [sum(activity_data['Activity']*activity_data['Class'].map(weight_lookuptable[c])) for c in activity_data['Class']]
10 loops, best of 3: 28.8 ms per loop
>>> %timeit new_q = activity_data["Activity"].dot(weight_lookuptable)
1000 loops, best of 3: 218 µs per loop
You can sometimes squeeze out a bit more performance by dropping to bare numpy (although then you have to be more careful to make sure that your indices are aligned):
>>> %timeit new_q = activity_data["Activity"].values.dot(weight_lookuptable.values)
10000 loops, best of 3: 43.4 µs per loop
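To see why the dot product matches: for each class c, the comprehension computes sum over i of Activity[i] * W[i, c], which is exactly element c of the vector-matrix product Activity . W (this relies on Class running 0..n-1 in row order, as in the setup above). A toy check, assuming rand above is numpy.random.rand:

import numpy as np
import pandas as pd

n = 3
classes = range(n)
activity = pd.DataFrame({'Class': list(classes), 'Activity': np.random.rand(n)})
W = pd.DataFrame(index=classes, columns=classes, data=np.random.rand(n, n))

# The original loop: for each class c, sum_i Activity[i] * W[i, c]
q = [sum(activity['Activity'] * activity['Class'].map(W[c]))
     for c in activity['Class']]
print(np.allclose(q, activity['Activity'].dot(W)))  # True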

sign() much slower in python than matlab?

I have a function in Python that basically takes the sign of an array (of shape (75, 150), for example).
I'm coming from Matlab, and the execution time looks more or less the same except for this function.
I'm wondering if sign() is just slow, and whether you know an alternative that does the same thing.
Thx,
I can't tell you if this is faster or slower than Matlab, since I have no idea what numbers you're seeing there (you provided no quantitative data at all). However, as far as alternatives go:
import numpy as np
a = np.random.randn(75, 150)
aSign = np.sign(a)
Testing using %timeit in IPython:
In [15]: %timeit np.sign(a)
10000 loops, best of 3: 180 µs per loop
Because the loop over the array (and what happens inside it) is implemented in optimized C code rather than generic Python code, it tends to be about an order of magnitude faster—in the same ballpark as Matlab.
Comparing the exact same code as a numpy vectorized operation vs. a Python loop:
In [276]: %timeit [np.sign(x) for x in a]
1000 loops, best of 3: 276 µs per loop
In [277]: %timeit np.sign(a)
10000 loops, best of 3: 63.1 µs per loop
So, only about 4x as fast. (But then a is pretty small here.)
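If you ever want an alternative formulation, comparisons are vectorized too, so the same values can be computed without np.sign; whether it is any faster will depend on your array, so time it yourself. A sketch (note it maps NaN to 0, unlike np.sign, which propagates NaN):

import numpy as np

a = np.random.randn(75, 150)
# +1.0 where positive, -1.0 where negative, 0.0 where zero
aSign = (a > 0).astype(a.dtype) - (a < 0)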

Are Numpy functions slow?

Numpy is supposed to be fast. However, when comparing Numpy ufuncs with standard Python functions I find that the latter are much faster.
For example,
aa = np.arange(1000000, dtype = float)
%timeit np.mean(aa) # 1000 loops, best of 3: 1.15 ms per loop
%timeit aa.mean # 10000000 loops, best of 3: 69.5 ns per loop
I got similar results with other Numpy functions like max, power. I was under the impression that Numpy has an overhead that makes it slower for small arrays but would be faster for large arrays. In the code above aa is not small: it has 1 million elements. Am I missing something?
Of course, Numpy is fast, only the functions seem to be slow:
bb = range(1000000)
%timeit mean(bb) # 1 loops, best of 3: 551 ms per loop
%timeit mean(list(bb)) # 10 loops, best of 3: 136 ms per loop
Others have already pointed out that your comparison is not a real comparison (you are not actually calling the function, and both versions are numpy).
But to give an answer to the question "Are numpy functions slow?": generally speaking, no, numpy functions are not slow (or at least not slower than plain Python functions). Of course there are some side notes to make:
'Slow' depends of course on what you compare with, and it can always be faster. With things like cython, numexpr, numba, calling C code, and others, it is in many cases certainly possible to get faster results.
Numpy has a certain overhead, which can be significant in some cases. As you already mentioned, numpy can be slower on small arrays and scalar math. For a comparison, see e.g. Are NumPy's math functions faster than Python's?
To make the comparison you wanted to make:
In [1]: import numpy as np
In [2]: aa = np.arange(1000000)
In [3]: bb = range(1000000)
For the mean (note that plain Python has no mean builtin; see Calculating arithmetic mean (average) in Python):
In [4]: %timeit np.mean(aa)
100 loops, best of 3: 2.07 ms per loop
In [5]: %timeit float(sum(bb))/len(bb)
10 loops, best of 3: 69.5 ms per loop
For max, numpy vs plain python:
In [6]: %timeit np.max(aa)
1000 loops, best of 3: 1.52 ms per loop
In [7]: %timeit max(bb)
10 loops, best of 3: 31.2 ms per loop
As a final note: in the above comparison I used a numpy array (aa) for the numpy functions and a list (bb) for the plain Python functions. If you use a list with numpy functions, it is again slower:
In [10]: %timeit np.max(bb)
10 loops, best of 3: 115 ms per loop
because the list is first converted to an array, which consumes most of the time. So, if you want to rely on numpy in your application, it is important to use numpy arrays to store your data (or, if you have a list, convert it to an array so the conversion only has to be done once).
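In code, that convert-once pattern is simply (a trivial sketch):

import numpy as np

bb = range(1000000)
aa = np.asarray(bb)  # pay the conversion cost exactly once
aa.max()             # subsequent numpy calls now operate on the array directly
aa.mean()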
You're not calling aa.mean. Put the function call parentheses on the end, to actually call it, and the speed difference will nearly vanish. (Both np.mean(aa) and aa.mean() are NumPy; neither uses Python builtins to do the math.)
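You can see this directly: without the parentheses you only time an attribute lookup, which returns the bound method without computing anything:

import numpy as np

aa = np.arange(1000000, dtype=float)
m = aa.mean  # attribute lookup only; no computation happens here
print(m)     # <built-in method mean of numpy.ndarray object at ...>
print(m())   # calling it actually computes the mean: 499999.5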
