Consider a regular matrix that represents nodes numbered as shown in the figure:
I want to make a list of all the triangles represented in the figure, which would result in the following two-dimensional list: [[0,1,4],[1,5,4],[1,2,5],[2,6,5],...,[11,15,14]]
Assuming that the dimensions of the matrix are Nr x Nc (4 x 4 in this case), I was able to achieve this result with the following code:
import numpy as np

def MakeFaces(Nr, Nc):
    Nfaces = (Nr-1)*(Nc-1)*2
    Faces = np.zeros((Nfaces, 3), dtype=np.int32)
    for r in range(Nr-1):
        for c in range(Nc-1):
            fi = (r*(Nc-1) + c)*2
            l1 = r*Nc + c
            l2 = l1 + 1
            l3 = l1 + Nc
            l4 = l3 + 1
            Faces[fi] = [l1, l2, l3]
            Faces[fi+1] = [l2, l4, l3]
    return Faces
However, the double loop makes this approach quite slow. Is there a smart way to use numpy to do this faster?
We can play a multi-dimensional game with slicing and multi-dimensional assignment, both of which NumPy handles very efficiently -
def MakeFacesVectorized1(Nr, Nc):
    out = np.empty((Nr-1, Nc-1, 2, 3), dtype=int)
    r = np.arange(Nr*Nc).reshape(Nr, Nc)
    out[:, :, 0, 0] = r[:-1, :-1]
    out[:, :, 1, 0] = r[:-1, 1:]
    out[:, :, 0, 1] = r[:-1, 1:]
    out[:, :, 1, 1] = r[1:, 1:]
    out[:, :, :, 2] = r[1:, :-1, None]
    out.shape = (-1, 3)
    return out
Runtime test and verification -
In [226]: Nr,Nc = 100, 100
In [227]: np.allclose(MakeFaces(Nr, Nc), MakeFacesVectorized1(Nr, Nc))
Out[227]: True
In [228]: %timeit MakeFaces(Nr, Nc)
100 loops, best of 3: 11.9 ms per loop
In [229]: %timeit MakeFacesVectorized1(Nr, Nc)
10000 loops, best of 3: 133 µs per loop
In [230]: 11900/133.0
Out[230]: 89.47368421052632
Around 90x speedup for Nr, Nc = 100, 100!
You can achieve a similar result without any explicit loops if you recast the problem correctly. One way would be to imagine the result as three arrays, each containing one of the vertices: first, second and third. You can then zip up or otherwise convert the arrays into whatever format you like in a fairly inexpensive operation.
You start with the actual matrix. This will make indexing and selecting elements much easier:
m = np.arange(Nr * Nc).reshape(Nr, Nc)
The first array will contain all the 90-degree corners:
c1 = np.concatenate((m[:-1, :-1].ravel(), m[1:, 1:].ravel()))
m[:-1, :-1] are the right-angle corners of the upper triangles (the top-left node of each cell), and m[1:, 1:] are the right-angle corners of the lower triangles (the bottom-right node of each cell).
The second array will contain the corresponding top acute corners:
c2 = np.concatenate((m[:-1, 1:].ravel(), m[:-1, 1:].ravel()))
And the third array will contain the corresponding bottom corners (shared by both triangles):
c3 = np.concatenate((m[1:, :-1].ravel(), m[1:, :-1].ravel()))
You can now get an array like your original one back by zipping:
faces = list(zip(c1, c2, c3))
I am sure that you can find ways to improve this algorithm, but it is a start.
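For convenience, here is a sketch assembling the three corner arrays into a single (Nfaces, 3) array with np.stack instead of zip (the function name is just illustrative). Note that the rows come out grouped (all upper triangles first, then all lower triangles) rather than interleaved as in the loop version, and the lower-triangle vertex order differs, though the same triangles are produced:

import numpy as np

def make_faces_corners(Nr, Nc):
    m = np.arange(Nr * Nc).reshape(Nr, Nc)
    c1 = np.concatenate((m[:-1, :-1].ravel(), m[1:, 1:].ravel()))  # 90-degree corners
    c2 = np.concatenate((m[:-1, 1:].ravel(), m[:-1, 1:].ravel()))  # top acute corners
    c3 = np.concatenate((m[1:, :-1].ravel(), m[1:, :-1].ravel()))  # bottom corners
    return np.stack((c1, c2, c3), axis=-1)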
I am reading an image captured through opencv and want to map a function over every pixel value in the image. The output is an m x n x 3 numpy array, where m and n are the height and width of the image and the three values are the corresponding blue, green, and red values for each pixel.
I first thought to run a nested for loop over each value in the image. However, it takes a long time to run, so I am looking for a more efficient way to loop over the image.
Here is the nested for loop:
a = list()
for row in img:
    for col in row:
        a.append(np.sqrt(np.prod(col[1:])))
adjusted = np.asarray(a).reshape((img.shape[0], img.shape[1]))
This code works, but I would like to make it run faster. I know vectorization could be an option, but I do not know how to apply it to only part of an array rather than the whole array. To do this, I think I could reshape the image to img.reshape((np.prod(img.shape[:2]), 3)) and then loop over each set of three values, but I do not know the correct function/iterator to use.
Also, if opencv/numpy/scipy has another function that does just this, it would be a great help. I'm also open to other options, but I wanted to give some ideas that I had.
In the end, I want to take the input and calculate the geometric mean of the red and green values, creating an m x n array of the geometric means. Any help would be appreciated!
This can be vectorized using the axis parameter of np.prod(): setting axis=-1 causes the product to be taken over only the last axis.
To perform this product over only the last two channels, index the array to extract those channels using img[..., 1:].
You can replace your code with the following line:
adjusted = np.sqrt(np.prod(img[..., 1:], axis=-1))
For fun, let's profile these two functions using some simulated data:
import numpy as np

img = np.random.random((100,100,3))

def original_function(img):
    a = []
    for row in img:
        for col in row:
            a.append(np.sqrt(np.prod(col[1:])))
    adjusted = np.asarray(a).reshape((img.shape[0], img.shape[1]))
    return adjusted

def improved_function(img):
    return np.sqrt(np.prod(img[:,:,1:], axis=-1))
>>> %timeit -n 100 original_function(img)
100 loops, best of 3: 55.5 ms per loop
>>> %timeit -n 100 improved_function(img)
100 loops, best of 3: 115 µs per loop
500x improvement in speed! The beauty of numpy vectorization :)
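A side note, not from the timings above: since only the green and red channels enter the product, you can also spell it out with explicit indexing. For real uint8 images, cast before multiplying so the elementwise product does not wrap around:

# equivalent formulation; the cast avoids uint8 overflow on real images
adjusted = np.sqrt(img[..., 1].astype(np.float64) * img[..., 2])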
I've implemented a k-means clustering algorithm in python, and now I want to label new data with the clusters I got from my algorithm. My approach is to iterate through every data point and every centroid to find the minimum distance and the centroid associated with it. But I wonder if there are simpler or shorter ways to do it.
def assign_cluster(clusterDict, data):
    clusterList = []
    label = []
    cen = list(clusterDict.values())
    for i in range(len(data)):
        for j in range(len(cen)):
            # if cen[j] has the minimum distance with data[i]
            # then clusterList[i] = cen[j]
Where clusterDict is a dictionary whose keys are labels ([0, 1, 2, ...]) and whose values are the coordinates of the centroids.
Can someone help me implement this?
This is a good use case for numba, because it lets you express this as a simple double loop without a big performance penalty. That in turn lets you avoid the excessive extra memory of np.tile, which replicates the data across a third dimension just to do the computation in a vectorized manner.
Borrowing the standard vectorized numpy implementation from the other answer, I have these two implementations:
import numba
import numpy as np

def kmeans_assignment(centroids, points):
    num_centroids, dim = centroids.shape
    num_points, _ = points.shape
    # Tile and reshape both arrays into `[num_points, num_centroids, dim]`.
    centroids = np.tile(centroids, [num_points, 1]).reshape([num_points, num_centroids, dim])
    points = np.tile(points, [1, num_centroids]).reshape([num_points, num_centroids, dim])
    # Compute all distances (for all points and all centroids) at once and
    # select the min centroid for each point.
    distances = np.sum(np.square(centroids - points), axis=2)
    return np.argmin(distances, axis=1)

@numba.jit
def kmeans_assignment2(centroids, points):
    P, C = points.shape[0], centroids.shape[0]
    distances = np.zeros((P, C), dtype=np.float32)
    for p in range(P):
        for c in range(C):
            distances[p, c] = np.sum(np.square(centroids[c] - points[p]))
    return np.argmin(distances, axis=1)
Then for some sample data, I did a few timing experiments:
In [12]: points = np.random.rand(10000, 50)
In [13]: centroids = np.random.rand(30, 50)
In [14]: %timeit kmeans_assignment(centroids, points)
196 ms ± 6.78 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit kmeans_assignment2(centroids, points)
127 ms ± 12.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I won't go so far as to say that the numba version is certainly faster than the np.tile version, but it's clearly very close while not incurring the extra memory cost of np.tile.
In fact, on my laptop, when I make the shapes larger and use (10000, 1000) for the shape of points and (200, 1000) for the shape of centroids, the np.tile version raises a MemoryError, while the numba function runs in under 5 seconds with no memory error.
Separately, I actually noticed a slowdown when using numba.jit on the first version (with np.tile), which is likely due to the extra array creation inside the jitted function combined with the fact that there's not much numba can optimize when you're already calling all vectorized functions.
And I also did not notice any significant improvement in the second version when trying to shorten the code with broadcasting, e.g. shortening the double loop to:
for p in range(P):
    distances[p, :] = np.sum(np.square(centroids - points[p, :]), axis=1)
did not really help anything (and would use more memory when repeatedly broadcasting points[p, :] across all of centroids).
This is one of the really nice benefits of numba. You really can write algorithms in a straightforward, loop-based way that comports with standard descriptions of algorithms, and still get fine-grained control over how the syntax unpacks into memory consumption or broadcasting... all without giving up runtime performance.
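One practical aside, not from the benchmark above: with @numba.jit, compilation happens on the first call for each new argument-type signature, so warm the function up before timing it:

_ = kmeans_assignment2(centroids, points)  # first call pays the JIT compilation cost
# subsequent calls run the already-compiled machine code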
An efficient way to perform the assignment phase is with a vectorized computation. This approach assumes that you start with two 2D arrays, points and centroids, with the same number of columns (the dimensionality of the space) but possibly different numbers of rows. By using tiling (np.tile) we can compute the distance matrix in a batch and then select the closest cluster for each point.
Here's the code:
def kmeans_assignment(centroids, points):
    num_centroids, dim = centroids.shape
    num_points, _ = points.shape
    # Tile and reshape both arrays into `[num_points, num_centroids, dim]`.
    centroids = np.tile(centroids, [num_points, 1]).reshape([num_points, num_centroids, dim])
    points = np.tile(points, [1, num_centroids]).reshape([num_points, num_centroids, dim])
    # Compute all distances (for all points and all centroids) at once and
    # select the min centroid for each point.
    distances = np.sum(np.square(centroids - points), axis=2)
    return np.argmin(distances, axis=1)
See this GitHub gist for a complete runnable example.
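As a hedged usage sketch bridging the question's clusterDict to the arrays this function expects (the dictionary contents here are made up for illustration, and it assumes the dict's insertion order matches its labels):

import numpy as np

clusterDict = {0: (0.0, 0.0), 1: (5.0, 5.0), 2: (0.0, 5.0)}  # label -> centroid coordinates
centroids = np.array(list(clusterDict.values()))
data = np.random.rand(10, 2) * 5.0
labels = kmeans_assignment(centroids, data)  # labels[i] indexes into clusterDict's key order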
I am working on a python project and making use of numpy. I frequently have to compute Kronecker products of matrices by the identity matrix. These are a pretty big bottleneck in my code so I would like to optimize them. There are two kinds of products I have to take. The first one is:
np.kron(np.eye(N), A)
This one is pretty easy to optimize by simply using scipy.linalg.block_diag (imported as la below). The product is equivalent to:
la.block_diag(*[A]*N)
Which is about 10 times faster. However, I am unsure how to optimize the second kind of product:
np.kron(A, np.eye(N))
Is there a similar trick I can use?
One approach would be to initialize a 4D output array and then assign values into it from A. Such an assignment broadcasts values, and that is where NumPy's efficiency comes from.
Thus, a solution would be like so -
# Get shape of A
m,n = A.shape
# Initialize output array as 4D
out = np.zeros((m,N,n,N))
# Get range array for indexing into the second and fourth axes
r = np.arange(N)
# Index into the second and fourth axes, selecting all elements along
# the rest, to assign values from A. The values are broadcast.
out[:,r,:,r] = A
# Finally reshape back to 2D
out.shape = (m*N,n*N)
Put as a function -
def kron_A_N(A, N):  # Simulates np.kron(A, np.eye(N))
    m, n = A.shape
    out = np.zeros((m, N, n, N), dtype=A.dtype)
    r = np.arange(N)
    out[:, r, :, r] = A
    out.shape = (m*N, n*N)
    return out
To simulate np.kron(np.eye(N), A), simply swap the first and second axes, and similarly the third and fourth -
def kron_N_A(A, N):  # Simulates np.kron(np.eye(N), A)
    m, n = A.shape
    out = np.zeros((N, m, N, n), dtype=A.dtype)
    r = np.arange(N)
    out[r, :, r, :] = A
    out.shape = (m*N, n*N)
    return out
Timings -
In [174]: N = 100
...: A = np.random.rand(100,100)
...:
In [175]: np.allclose(np.kron(A, np.eye(N)), kron_A_N(A,N))
Out[175]: True
In [176]: %timeit np.kron(A, np.eye(N))
1 loops, best of 3: 458 ms per loop
In [177]: %timeit kron_A_N(A, N)
10 loops, best of 3: 58.4 ms per loop
In [178]: 458/58.4
Out[178]: 7.842465753424658
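For completeness, the swapped variant can be sanity-checked the same way (a quick check; it was not timed in the original, and the small non-square shape here is just illustrative):

A = np.random.rand(20, 30)
N = 5
# kron_N_A should reproduce the block-diagonal np.kron(np.eye(N), A) exactly
assert np.allclose(np.kron(np.eye(N), A), kron_N_A(A, N))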
Hi, I'm relatively new here and trying to do some calculations with numpy. One particular calculation takes a long time, and I can't work out any faster way to achieve the same thing.
Basically it's part of a ray-triangle intersection algorithm, and I need to calculate all the vector cross products between two matrices of different sizes.
The code I was using was :
allhvals1 = numpy.cross( dirvectors[:,None,:], trivectors2[None,:,:] )
where dirvectors is an array of n vectors (xyz) and trivectors2 is an array of m vectors (xyz). allhvals1 is the array of cross products, of shape n x m x 3 (xyz).
This works but is very slow: it essentially computes the cross product of every vector in one array with every vector in the other, giving an n x m matrix of vectors. The sizes vary from approximately 1 to 4000 depending on parameters (I basically chunk dirvectors depending on size).
Any advice appreciated. Unfortunately my matrix math is somewhat flaky.
If you look at the source code of np.cross, it basically moves the xyz dimension to the front of the shape tuple for all arrays, and then has the calculation of each of the components spelled out like this:
x = a[1]*b[2] - a[2]*b[1]
y = a[2]*b[0] - a[0]*b[2]
z = a[0]*b[1] - a[1]*b[0]
In your case, each of those products requires allocating huge arrays, so the overall behavior is not very efficient.
Let's set up some test data:
u = np.random.rand(1000, 3)
v = np.random.rand(2000, 3)
In [13]: %timeit s1 = np.cross(u[:, None, :], v[None, :, :])
1 loops, best of 3: 591 ms per loop
We can try to compute it using Levi-Civita symbols and np.einsum as follows:
eijk = np.zeros((3, 3, 3))
eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1
eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1
In [14]: %timeit s2 = np.einsum('ijk,uj,vk->uvi', eijk, u, v)
1 loops, best of 3: 706 ms per loop
In [15]: np.allclose(s1, s2)
Out[15]: True
So while it works, it has worse performance. The thing is that np.einsum has trouble when there are more than two operands, but has optimized pathways for two or fewer. So we can try to rewrite it in two steps, to see if that helps:
In [16]: %timeit s3 = np.einsum('iuk,vk->uvi', np.einsum('ijk,uj->iuk', eijk, u), v)
10 loops, best of 3: 63.4 ms per loop
In [17]: np.allclose(s1, s3)
Out[17]: True
Bingo! Close to an order of magnitude improvement...
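If this comes up often, the two-step einsum can be packaged as a small helper (pairwise_cross is an illustrative name, not a NumPy function):

import numpy as np

# Levi-Civita symbol
eijk = np.zeros((3, 3, 3))
eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1
eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1

def pairwise_cross(u, v):
    # All pairwise cross products: out[i, j] = np.cross(u[i], v[j])
    return np.einsum('iuk,vk->uvi', np.einsum('ijk,uj->iuk', eijk, u), v)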
Some performance figures for NumPy 1.11.0, with a = numpy.random.rand(n, 3) and b = numpy.random.rand(n, 3): the nested einsum is about twice as fast as cross for the largest n tested.
While writing dynamic simulations for underwater vehicles, I found this method for a fast cross product:
https://github.com/simena86/Simulink-Underwater-Robotics-Simulator/blob/master/3rdparty/gnc_mfiles/Smtrx.m
It works well. It is written in Matlab, but the code is very simple; just read the comments at the top.
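For reference, here is a minimal NumPy sketch of the same idea, assuming it mirrors the linked Matlab function: the cross product a x b equals S(a) times b, where S(a) is the skew-symmetric matrix built from a:

import numpy as np

def smtrx(a):
    # Skew-symmetric matrix such that smtrx(a) . b == np.cross(a, b)
    return np.array([[   0.0, -a[2],  a[1]],
                     [  a[2],   0.0, -a[0]],
                     [ -a[1],  a[0],   0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
assert np.allclose(np.dot(smtrx(a), b), np.cross(a, b))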
EDIT: I have kept the more complicated problem I am facing below, but my problems with np.take can be summarized better as follows. Say you have an array img of shape (planes, rows), and another array lut of shape (planes, 256), and you want to use them to create a new array out of shape (planes, rows), where out[p, j] = lut[p, img[p, j]]. This can be achieved with fancy indexing as follows:
In [4]: %timeit lut[np.arange(planes).reshape(-1, 1), img]
1000 loops, best of 3: 471 us per loop
But if, instead of fancy indexing, you use take and a python loop over the planes, things can be sped up tremendously:
In [6]: %timeit for _ in (lut[j].take(img[j]) for j in xrange(planes)) : pass
10000 loops, best of 3: 59 us per loop
Can lut and img be in some way rearranged, so that the whole operation happens without python loops, using numpy.take (or an alternative method) instead of conventional fancy indexing, while keeping the speed advantage?
ORIGINAL QUESTION
I have a set of look-up tables (LUTs) that I want to use on an image. The array holding the LUTs is of shape (planes, 256, n), and the image has shape (planes, rows, cols). Both are of dtype = 'uint8', matching the 256 axis of the LUT. The idea is to run the p-th plane of the image through each of the n LUTs from the p-th plane of the LUT.
If my lut and img are the following:
planes, rows, cols, n = 3, 4000, 4000, 4
lut = np.random.randint(-2**31, 2**31 - 1,
                        size=(planes * 256 * n // 4,)).view('uint8')
lut = lut.reshape(planes, 256, n)
img = np.random.randint(-2**31, 2**31 - 1,
                        size=(planes * rows * cols // 4,)).view('uint8')
img = img.reshape(planes, rows, cols)
I can achieve what I am after using fancy indexing like this
out = lut[np.arange(planes).reshape(-1, 1, 1), img]
which gives me an array of shape (planes, rows, cols, n), where out[i, :, :, j] holds the i-th plane of img run through the j-th LUT of the i-th plane of the LUT...
All is good, except for this:
In [2]: %timeit lut[np.arange(planes).reshape(-1, 1, 1), img]
1 loops, best of 3: 5.65 s per loop
which is completely unacceptable, especially since I have all of the following not-so-nice-looking alternatives using np.take that run much faster:
A single LUT on a single plane runs about x70 faster:
In [2]: %timeit np.take(lut[0, :, 0], img[0])
10 loops, best of 3: 78.5 ms per loop
A python loop running through all the desired combinations finishes almost x6 faster:
In [2]: %timeit for _ in (np.take(lut[j, :, k], img[j]) for j in xrange(planes) for k in xrange(n)) : pass
1 loops, best of 3: 947 ms per loop
Even running all combinations of planes in the LUT and image and then discarding the planes**2 - planes unwanted ones is faster than fancy indexing:
In [2]: %timeit np.take(lut, img, axis=1)[np.arange(planes), np.arange(planes)]
1 loops, best of 3: 3.79 s per loop
And the fastest combination I have been able to come up with has a python loop iterating over the planes and finishes x13 faster:
In [2]: %timeit for _ in (np.take(lut[j], img[j], axis=0) for j in xrange(planes)) : pass
1 loops, best of 3: 434 ms per loop
The question, of course, is whether there is a way of doing this with np.take without any python loop. Ideally, whatever reshaping or resizing is needed should happen on the LUT, not the image, but I am open to whatever you people can come up with...
First of all, I have to say I really liked your question. Without rearranging LUT or IMG, the following solution worked:
%timeit a=np.take(lut, img, axis=1)
# 1 loops, best of 3: 1.93s per loop
But from the result you have to take the diagonal, a[0, 0], a[1, 1], a[2, 2], to get what you want. I've tried to find a way to do this indexing only for the diagonal elements, but have not managed it.
Here are some ways to rearrange your LUT and IMG:
The following works if the indices in IMG run from 0-255 for the 1st plane, 256-511 for the 2nd plane, and 512-767 for the 3rd plane, but that would prevent you from using 'uint8', which can be a big issue...:
lut2 = lut.reshape(-1,4)
%timeit np.take(lut2,img,axis=0)
# 1 loops, best of 3: 716 ms per loop
# or
%timeit np.take(lut2, img.flatten(), axis=0).reshape(3,4000,4000,4)
# 1 loops, best of 3: 709 ms per loop
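Building on that reshape, a hedged sketch of how to drop the Python loop entirely: shift each plane's image values by plane * 256, so that a single np.take over the flattened LUT addresses the right plane (this requires promoting the index array beyond 'uint8', which is exactly the cost mentioned above):

offsets = np.arange(planes)[:, None, None] * 256   # one offset per plane
idx = img.astype(np.intp) + offsets                # shape (planes, rows, cols)
out = np.take(lut.reshape(-1, n), idx, axis=0)     # shape (planes, rows, cols, n)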
On my machine your solution is still the best option, and very adequate, since you just need the diagonal evaluations, i.e. plane1-plane1, plane2-plane2 and plane3-plane3:
%timeit for _ in (np.take(lut[j], img[j], axis=0) for j in xrange(planes)) : pass
# 1 loops, best of 3: 677 ms per loop
I hope this gives you some insight toward a better solution. It would be nice to explore more options with flatten(), and methods such as np.apply_over_axes() or np.apply_along_axis(), which seem promising.
I used this code below to generate the data:
import numpy as np
num = 4000
planes, rows, cols, n = 3, num, num, 4
lut = np.random.randint(-2**31, 2**31 - 1, size=(planes*256*n//4,)).view('uint8')
lut = lut.reshape(planes, 256, n)
img = np.random.randint(-2**31, 2**31 - 1, size=(planes*rows*cols//4,)).view('uint8')
img = img.reshape(planes, rows, cols)