I need help vectorizing this code. Right now, with N=100, it takes a minute or so to run. I would like to speed that up. I have done something like this for a double loop, but never with a 3D loop, and I am having difficulties.
import numpy as np
N = 100
n = 12
r = np.sqrt(2)
x = np.arange(-N,N+1)
y = np.arange(-N,N+1)
z = np.arange(-N,N+1)
C = 0
for i in x:
    for j in y:
        for k in z:
            if (i+j+k)%2==0 and (i*i+j*j+k*k!=0):
                p = np.sqrt(i*i+j*j+k*k)
                p = p/r
                q = (1/p)**n
                C += q
print '\n'
print C
The meshgrid/where/indexing solution is already extremely fast. I made it about 65% faster. That is not a huge gain, but I will explain it anyway, step by step:
It was easiest for me to approach this problem with all 3D vectors in the grid being columns in one large 2D 3 x M array. meshgrid is the right tool for creating all the combinations (note that numpy version >= 1.7 is required for a 3D meshgrid), and vstack + reshape bring the data into the desired form. Example:
>>> np.vstack(np.meshgrid(*[np.arange(0, 2)]*3)).reshape(3,-1)
array([[0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 1, 0, 1, 0, 1, 0, 1]])
Each column is one 3D vector. Each of these eight vectors represents one corner of a 1x1x1 cube (a 3D grid with step size 1 and length 1 in all dimensions).
Let's call this array vectors (it contains all 3D vectors representing all points in the grid). Then, prepare a bool mask for selecting those vectors fulfilling your mod2 criterion:
mod2bool = np.sum(vectors, axis=0) % 2 == 0
np.sum(vectors, axis=0) creates a 1 x M array containing the element sum for each column vector. Hence, mod2bool is a 1 x M array with a bool value for each column vector. Now use this bool mask:
vectorsubset = vectors[:,mod2bool]
This selects all rows (:) and uses boolean indexing for filtering the columns; both are fast operations in numpy. Calculate the lengths of the remaining vectors, using the native numpy approach:
lengths = np.sqrt(np.sum(vectorsubset**2, axis=0))
This is quite fast -- however, scipy.stats.ss and bottleneck.ss can perform the squared sum calculation even faster than this.
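If you do not want an extra dependency, a small alternative sketch (my addition, not part of the original timings) computes the squared column sums with np.einsum, which is also fast:
# sketch: squared sums over axis 0 without scipy/bottleneck, reusing vectorsubset from above
squared_sums = np.einsum('ij,ij->j', vectorsubset, vectorsubset)
lengths = np.sqrt(squared_sums)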
Transform the lengths using your instructions:
with np.errstate(divide='ignore'):
    p = (r/lengths)**n
This involves dividing finite numbers by zero, resulting in infs in the output array. That is entirely fine here. We use numpy's errstate context manager to make sure these zero divisions do not raise an exception or a runtime warning.
Now sum up the finite elements (ignore the infs) and return the sum:
return np.sum(p[np.isfinite(p)])
I have implemented this method twice below: once exactly as just explained, and once using bottleneck's ss and nansum functions. I have also added your method for comparison, and a modified version of your method that skips the np.where((x*x+y*y+z*z)!=0) indexing, instead creates infs, and finally sums up the isfinite way.
import sys
import numpy as np
import bottleneck as bn
N = 100
n = 12
r = np.sqrt(2)
x,y,z = np.meshgrid(*[np.arange(-N, N+1)]*3)
gridvectors = np.vstack((x,y,z)).reshape(3, -1)
def measure_time(func):
    import time
    def modified_func(*args, **kwargs):
        t0 = time.time()
        result = func(*args, **kwargs)
        duration = time.time() - t0
        print("%s duration: %.3f s" % (func.__name__, duration))
        return result
    return modified_func
@measure_time
def method_columnvecs(vectors):
    mod2bool = np.sum(vectors, axis=0) % 2 == 0
    vectorsubset = vectors[:,mod2bool]
    lengths = np.sqrt(np.sum(vectorsubset**2, axis=0))
    with np.errstate(divide='ignore'):
        p = (r/lengths)**n
    return np.sum(p[np.isfinite(p)])
@measure_time
def method_columnvecs_opt(vectors):
    # On my system, bn.nansum is even slightly faster than np.sum.
    mod2bool = bn.nansum(vectors, axis=0) % 2 == 0
    # Use ss from bottleneck or scipy.stats (axis=0 is default).
    lengths = np.sqrt(bn.ss(vectors[:,mod2bool]))
    with np.errstate(divide='ignore'):
        p = (r/lengths)**n
    return bn.nansum(p[np.isfinite(p)])
@measure_time
def method_original(x,y,z):
    ind = np.where((x+y+z)%2==0)
    x = x[ind]
    y = y[ind]
    z = z[ind]
    ind = np.where((x*x+y*y+z*z)!=0)
    x = x[ind]
    y = y[ind]
    z = z[ind]
    p = np.sqrt(x*x+y*y+z*z)/r
    return np.sum((1/p)**n)
@measure_time
def method_original_finitesum(x,y,z):
    ind = np.where((x+y+z)%2==0)
    x = x[ind]
    y = y[ind]
    z = z[ind]
    lengths = np.sqrt(x*x+y*y+z*z)
    with np.errstate(divide='ignore'):
        p = (r/lengths)**n
    return np.sum(p[np.isfinite(p)])
print method_columnvecs(gridvectors)
print method_columnvecs_opt(gridvectors)
print method_original(x,y,z)
print method_original_finitesum(x,y,z)
This is the output:
$ python test.py
method_columnvecs duration: 1.295 s
12.1318801965
method_columnvecs_opt duration: 1.162 s
12.1318801965
method_original duration: 1.936 s
12.1318801965
method_original_finitesum duration: 1.714 s
12.1318801965
All methods produce the same result. Your method becomes a bit faster when doing the isfinite style sum. My methods are faster, but I would say that this is an exercise of academic nature rather than an important improvement :-)
I have one question left: you were saying that for N=3, the calculation should produce a 12. Even yours doesn't do this. All methods above produce 12.1317530867 for N=3. Is this expected?
Thanks to @Bill, I was able to get this to work. Very fast now. Perhaps it could be done better, especially with the two masks that replace the two conditions I originally checked inside the loops (a sketch of combining them follows after the code).
from __future__ import division
import numpy as np
N = 100
n = 12
r = np.sqrt(2)
x, y, z = np.meshgrid(*[np.arange(-N, N+1)]*3)
ind = np.where((x+y+z)%2==0)
x = x[ind]
y = y[ind]
z = z[ind]
ind = np.where((x*x+y*y+z*z)!=0)
x = x[ind]
y = y[ind]
z = z[ind]
p=np.sqrt(x*x+y*y+z*z)/r
ans = (1/p)**n
ans = np.sum(ans)
print 'ans'
print ans
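As a follow-up to the "perhaps could be done better" remark above, the two np.where filters can be folded into a single boolean mask. This is only a sketch of that idea, using the same N, n, and r as above:
# sketch: combine both conditions into one boolean mask
x, y, z = np.meshgrid(*[np.arange(-N, N+1)]*3)
keep = ((x + y + z) % 2 == 0) & (x*x + y*y + z*z != 0)
p = np.sqrt(x[keep]**2 + y[keep]**2 + z[keep]**2) / r
print np.sum((1/p)**n)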
Related
I'm doing aperture photometry on a cluster of stars, and to make it easier to detect the background signal, I want to look only at stars that are further than n pixels apart (n=16 in my case).
I have 2 arrays, xs and ys, with the x- and y-values of all the stars' coordinates:
Using np.where, I'm supposed to find the indices of all stars where the distance to all other stars is >= n.
So far, my method has been a for-loop
import numpy as np
# Lists of coordinates w. values between 0 and 2000 for 5000 stars
xs = np.random.rand(5000)*2000
ys = np.random.rand(5000)*2000
# for-loop, wherein the np.where statement in question is situated
n = 16
for i in range(len(xs)):
    index = np.where( np.sqrt( pow(xs[i] - xs,2) + pow(ys[i] - ys,2)) >= n)
Due to the stars being clustered pretty closely together, I expected a severe reduction in data; yet even when I tried n=1000, I still had around 4000 data points left.
Using just numpy (and part of the answer here)
X = np.random.rand(5000,2) * 2000
XX = np.einsum('ij, ij ->i', X, X)
D_squared = XX[:, None] + XX - 2 * X.dot(X.T)
np.fill_diagonal(D_squared, np.inf)  # exclude each star's zero distance to itself
out = np.where(D_squared.min(axis = 0) > n**2)
Using scipy.spatial.pdist
from scipy.spatial.distance import pdist, squareform
D_squared = squareform(pdist(X, metric = 'sqeuclidean'))
np.fill_diagonal(D_squared, np.inf)  # exclude self-distances on the diagonal
out = np.where(D_squared.min(axis = 0) > n**2)
Using a KDTree for maximum speed:
from scipy.spatial import KDTree
X_tree = KDTree(X)
in_radius = np.array(list(X_tree.query_pairs(n))).flatten()
out = np.where(~np.in1d(np.arange(X.shape[0]), in_radius))
np.random.seed(seed=1)
xs = np.random.rand(5000,1)*2000
ys = np.random.rand(5000,1)*2000
n = 16
mask = (xs>=0)
for i in range(len(xs)):
    if mask[i]:
        index = np.where( np.sqrt( pow(xs[i] - xs,2) + pow(ys[i] - ys,2)) <= n)
        mask[index] = False
        mask[i] = True
x = xs[mask]
y = ys[mask]
print(len(x))
4220
You can use np.subtract.outer for creating the pairwise comparisons. Then you check for each row whether the distance is below 16 for exactly one item (which is the comparison with the particular star itself):
distances = np.sqrt(
np.subtract.outer(xs, xs)**2
+ np.subtract.outer(ys, ys)**2
)
indices = np.nonzero(np.sum(distances < 16, axis=1) == 1)
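To actually pull out the surviving stars from that result, a small usage sketch (assuming the 1-D xs and ys arrays from the question):
# indices is the tuple returned by np.nonzero, so it can index xs and ys directly
xs_isolated = xs[indices]
ys_isolated = ys[indices]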
I'm trying to sum a two-dimensional function using the array method; somehow, using a for loop is not outputting the correct answer. I want to find (in LaTeX) $$\sum_{i=1}^{M}\sum_{j=1}^{M_2}\cos(i)\cos(j)$$ where, according to Mathematica, the answer when M=5 is 1.52725. According to the for loop:
def f(N):
    s1=0;
    for p1 in range(N):
        for p2 in range(N):
            s1+=np.cos(p1+1)*np.cos(p2+1)
            return s1
print(f(4))
is 0.291927.
I have thus been trying to use some code of the form:
def f1(N):
    mat3=np.zeros((N,N),np.complex)
    for i in range(0,len(mat3)):
        for j in range(0,len(mat3)):
            mat3[i][j]=np.cos(i+1)*np.cos(j+1)
            return sum(mat3)
which again
print(f1(4))
outputs 0.291927. Looking at the array we should find for each value of i and j a matrix of the form
mat3=[[np.cos(1)*np.cos(1),np.cos(2)*np.cos(1),...],[np.cos(2)*np.cos(1),...]...[np.cos(N+1)*np.cos(N+1)]]
so for N=4 we should have
mat3=[[np.cos(1)*np.cos(1) np.cos(2)*np.cos(1) ...] [np.cos(2)*np.cos(1) ...]...[... np.cos(5)*np.cos(5)]]
but what I actually get is the following
mat3=[[0.29192658+0.j 0.+0.j 0.+0.j ... 0.+0.j] ... [... 0.+0.j]]
or a matrix of all zeros apart from the mat3[0][0] element.
Does anybody know a correct way to do this and get the correct answer? I chose this as an example because the problem I'm trying to solve involves plotting a function which has been summed over two indices and the function that python outputs is not the same as Mathematica (i.e., a function of the form $$f(E)=\sum_{i=1}^{M}\sum_{j=1}^{M_2}F(i,j,E)$$).
The return statement is not indented correctly in your sample code: the function returns during the first iteration of the loops. Indent it at the function-body level instead, so that both for loops finish:
def f(N):
    s1=0;
    for p1 in range(N):
        for p2 in range(N):
            s1+=np.cos(p1+1)*np.cos(p2+1)
    return s1
>>> print(f(5))
1.527247272700347
I have rewritten your code in a more numpy-ish version:
import numpy as np
N = 5
x = np.arange(N) + 1
y = np.arange(N) + 1
x = x.reshape((-1, 1))
y = y.reshape((1, -1))
mat = np.cos(x) * np.cos(y)
print(mat.sum()) # 1.5272472727003474
The trick here is to reshape x to a column and y to a row vector. If you multiply them, they are matched up like in your loop.
This should be more performant, since cos() is only called 2*N times, and it avoids explicit loops (which are slow in Python).
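As a small illustration of the same broadcasting idea, the matrix can also be built with np.outer; this sketch is equivalent to the reshape version above:
import numpy as np

N = 5
i = np.arange(1, N + 1)
# outer product of the cosine vector with itself gives the same N x N matrix
mat = np.outer(np.cos(i), np.cos(i))
print(mat.sum())  # ~1.5272472727, same as above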
UPDATE (regarding your comment):
This pattern can be extended to any number of dimensions. Basically, you get something like a cross product, where every instance of x is matched up with every instance of y, z, u, k, ... along the corresponding dimensions.
It's a bit confusing to describe, so here is some more code:
import numpy as np
N = 5
x = np.arange(N) + 1
y = np.arange(N) + 1
z = np.arange(N) + 1
x = x.reshape((-1, 1, 1))
y = y.reshape((1, -1, 1))
z = z.reshape((1, 1, -1))
mat = z**2 * np.cos(x) * np.cos(y)
# x along first axis
# y along second, z along third
# mat[0, 0, 0] == 1**2 * np.cos(1) * np.cos(1)
# mat[0, 4, 2] == 3**2 * np.cos(1) * np.cos(5)
If you use this for many dimensions, and big values for N, you will run into memory problems, though.
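One way to sidestep the memory issue, at least when the summand factorizes as in these examples, is to sum each factor separately instead of building the full grid. This is my own side note, not part of the original answer:
import numpy as np

N = 5
k = np.arange(N) + 1
# sum over i, j, k of k**2 * cos(i) * cos(j) factorizes into three 1-D sums
total = np.sum(k**2) * np.sum(np.cos(k)) * np.sum(np.cos(k))
print(total)  # equals mat.sum() from the 3-D example above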
In the following code I have implemented Gaussian elimination with partial pivoting for a general square linear system Ax=b. I have tested my code and it produces the right output. However, now I am trying to do the following, but I am not quite sure how to code it, and I'm looking for some help with this!
I want to test my implementation by solving Ax=b where A is a random 100x100 matrix and b is a random 100x1 vector.
In my code I have put in the matrices
A = np.array([[3.,2.,-4.],[2.,3.,3.],[5.,-3.,1.]])
b = np.array([[3.],[15.],[14.]])
and gotten the following correct output:
[3. 1. 2.]
[3. 1. 2.]
but now how do I change it to generate the random matrices?
Here is my code:
import numpy as np
def GEPP(A, b, doPricing = True):
    '''
    Gaussian elimination with partial pivoting.
    input: A is an n x n numpy matrix
           b is an n x 1 numpy array
    output: x is the solution of Ax=b
            with the entries permuted in
            accordance with the pivoting
            done by the algorithm
    post-condition: A and b have been modified.
    '''
    n = len(A)
    if b.size != n:
        raise ValueError("Invalid argument: incompatible sizes between " +
                         "A & b.", b.size, n)
    # k represents the current pivot row. Since GE traverses the matrix in the
    # upper right triangle, we also use k for indicating the k-th diagonal
    # column index.
    # Elimination
    for k in range(n-1):
        if doPricing:
            # Pivot
            maxindex = abs(A[k:,k]).argmax() + k
            if A[maxindex, k] == 0:
                raise ValueError("Matrix is singular.")
            # Swap
            if maxindex != k:
                A[[k,maxindex]] = A[[maxindex, k]]
                b[[k,maxindex]] = b[[maxindex, k]]
        else:
            if A[k, k] == 0:
                raise ValueError("Pivot element is zero. Try setting doPricing to True.")
        # Eliminate
        for row in range(k+1, n):
            multiplier = A[row,k]/A[k,k]
            A[row, k:] = A[row, k:] - multiplier*A[k, k:]
            b[row] = b[row] - multiplier*b[k]
    # Back Substitution
    x = np.zeros(n)
    for k in range(n-1, -1, -1):
        x[k] = (b[k] - np.dot(A[k,k+1:],x[k+1:]))/A[k,k]
    return x
if __name__ == "__main__":
    A = np.array([[3.,2.,-4.],[2.,3.,3.],[5.,-3.,1.]])
    b = np.array([[3.],[15.],[14.]])
    print (GEPP(np.copy(A), np.copy(b), doPricing = False))
    print (GEPP(A,b))
You're already using numpy. Have you considered np.random.rand?
np.random.rand(m, n) will get you a random matrix with values in [0, 1). You can further process it, for example by scaling the values or rounding them.
EDIT: Something like this
if __name__ == "__main__":
    A = np.round(np.random.rand(100, 100)*10)
    b = np.round(np.random.rand(100)*10)
    print (GEPP(np.copy(A), np.copy(b), doPricing = False))
    print (GEPP(A,b))
So I would use np.random.randint for this.
numpy.random.randint(low, high=None, size=None, dtype='l')
which outputs a size-shaped array of random integers from the appropriate distribution, or a single such random int if size not provided.
low is the lower bound of the ints you want in your range
high is one greater than the upper bound in your desired range
size is the dimensions of your output array
dtype is the dtype of the result
So if I were you, I would write
A = np.random.randint(0, 11, (100, 100))
b = np.random.randint(0, 11, 100)
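One caveat if you feed these into the elimination routine above (my note, not part of this answer): randint returns integer arrays, and the in-place row updates would then truncate the float multipliers, so you probably want to cast to float first:
A = np.random.randint(0, 11, (100, 100)).astype(float)
b = np.random.randint(0, 11, 100).astype(float)
print (GEPP(np.copy(A), np.copy(b)))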
Basically, you could create the desired matrices filled with ones and then iterate over them, setting each value to random.randint(0, 100), for example.
A matrix of ones is:
one_array = np.ones((100, 100))
EDIT:
like:
import random
for x in range(one_array.shape[0]):
    for y in range(one_array.shape[1]):
        one_array[x][y] = random.randint(0, 100)
A = np.random.normal(size=(100,100))
b = np.random.normal(size=(100,1))
x = np.linalg.solve(A,b)
assert max(abs(A@x - b)) < 1e-12
Clearly, you can use different distributions than normal, like uniform.
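For example, a uniform variant could look like this (a sketch with an arbitrary range):
A = np.random.uniform(-1.0, 1.0, size=(100, 100))
b = np.random.uniform(-1.0, 1.0, size=(100, 1))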
You can use numpy's native rand function:
np.random.rand()
In your code just define A and b as:
A = np.random.rand(100, 100)
b = np.random.rand(100)
This will generate a 100x100 matrix and a length-100 vector (both numpy arrays) filled with random values between 0 and 1.
See the docs for this function to learn more.
I have a numpy array a of length n, which has the numbers 0 through n-1 shuffled in some way. I also have a numpy array mask of length <= n, containing some subset of the elements of a, in a different order.
The query I want to compute is "give me the elements of a that are also in mask in the order that they appear in a".
I had a similar question here, but the difference was that mask was a boolean mask instead of a mask on the individual elements.
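A tiny concrete example of the query (my own illustration, not from the benchmark below):
a = np.array([3, 0, 2, 1])
mask = np.array([1, 3])
# desired output: the elements of a that appear in mask, in a's order -> [3, 1]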
I've outlined and tested 4 methods below:
import timeit
import numpy as np
import matplotlib.pyplot as plt
n_test = 100
n_coverages = 10
np.random.seed(0)
def method1():
    return np.array([x for x in a if x in mask])
def method2():
    s = set(mask)
    return np.array([x for x in a if x in s])
def method3():
    return a[np.in1d(a, mask, assume_unique=True)]
def method4():
    bmask = np.full((n_samples,), False)
    bmask[mask] = True
    return a[bmask[a]]
methods = [
('naive membership', method1),
('python set', method2),
('in1d', method3),
('binary mask', method4)
]
p_space = np.linspace(0, 1, n_coverages)
for n_samples in [1000]:
    a = np.arange(n_samples)
    np.random.shuffle(a)
    for label, method in methods:
        if method == method1 and n_samples == 10000:
            continue
        times = []
        for coverage in p_space:
            mask = np.random.choice(a, size=int(n_samples * coverage), replace=False)
            time = timeit.timeit(method, number=n_test)
            times.append(time * 1e3)
        plt.plot(p_space, times, label=label)
plt.xlabel(r'Coverage ($\frac{|\mathrm{mask}|}{|\mathrm{a}|}$)')
plt.ylabel('Time (ms)')
plt.title('Comparison of 1-D Intersection Methods for $n = {}$ samples'.format(n_samples))
plt.legend()
plt.show()
Which produced the following results:
So the binary mask is, without a doubt, the fastest of these 4 methods for any size of the mask.
My question is, is there a faster way?
I totally agree that the binary mask method is the fastest one. I also don't think there is a better way, in terms of computational complexity, to do what you need.
Let me analyse the timing results of your methods:
Method 1 (naive membership): the running time is T = O(|a| * |mask|). Every element of a is checked for membership in mask by iterating over all of mask's elements, which costs O(|mask|) per element in the worst case (when the element is missing from mask). |a| does not change, so consider it a constant. With |mask| = coverage * |a|, this gives T = O(|a|² * coverage). Hence the linear dependency on coverage in the plot. Note that the running time depends quadratically on |a|: if |mask| ≤ |a| and |a| = n, then T = O(n²).
Method 2 (python set): a set is a data structure that performs insertion/lookup in O(log(n)), where n is the number of elements in the set (but see the EDIT at the end). s = set(mask) takes O(|mask| * log(|mask|)) to complete because there are |mask| insertion operations. x in s is a lookup operation, so the second line runs in O(|a| * log(|mask|)). The overall time complexity is O(|mask| * log(|mask|) + |a| * log(|mask|)). If |mask| ≤ |a| and |a| = n, then T = O(n * log(n)). You probably observe an f(x) = log(x) dependency in the plot.
Method 3 (in1d) runs in O(|mask| * log(|mask|) + |a| * log(|mask|)) as well: the same T = O(n * log(n)) complexity and the same f(x) = log(x) dependency in the plot.
Method 4 (binary mask): the time complexity is O(|a| + |mask|), which is T = O(n), and that is the best possible. You observe a constant dependency on coverage in the plot. The algorithm simply iterates over the a and mask arrays a couple of times. The thing is, if you have to output n items you already need T = O(n), so this method 4 algorithm is optimal.
P.S. In order to observe the mentioned f(n) dependencies, you had better vary |a| and let |mask| = 0.9 * |a|.
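A minimal sketch of that experiment, reusing the names from your benchmark code (my suggestion only, not something I have plotted):
# vary |a| and keep |mask| = 0.9 * |a|
for n_samples in [1000, 2000, 5000, 10000, 20000]:
    a = np.arange(n_samples)
    np.random.shuffle(a)
    mask = np.random.choice(a, size=int(0.9 * n_samples), replace=False)
    for label, method in methods:
        print(label, n_samples, timeit.timeit(method, number=n_test))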
EDIT: It looks like the Python set actually performs lookup/insert in O(1) on average, using a hash table.
Assuming a is the bigger one.
def with_searchsorted(a, b):
    sb = b.argsort()
    bs = b[sb]
    sa = a.argsort()
    ia = np.arange(len(a))
    ra = np.empty_like(sa)
    ra[sa] = ia
    ac = bs.searchsorted(ia) % b.size
    return a[(bs[ac] == ia)[ra]]
demo
a = np.arange(10)
np.random.shuffle(a)
b = np.random.choice(a, 5, False)
print(a)
print(b)
[7 2 9 3 0 4 8 5 6 1]
[0 8 5 4 6]
print(with_searchsorted(a, b))
[0 4 8 5 6]
how it works
# sort b for faster searchsorting
sb = b.argsort()
bs = b[sb]
# sort a for faster searchsorting
sa = a.argsort()
# this is the sorted a... we just cheat because we know what it will be
ia = np.arange(len(a))
# construct the reverse sort look up
ra = np.empty_like(sa)
ra[sa] = ia
# perform searchsort
ac = bs.searchsorted(ia) % b.size
return a[(bs[ac] == ia)[ra]]
I have implemented an algorithm which calculates a quality assessment of disparity maps based on total variation.
I'm relatively new to Python, but I have already read numerous threads on speeding up numpy code: views vs. fancy indexing, trying Cython, vectorization of nested loops, etc. I achieved a bit of a speed-up, but altogether I ended up with messier and messier code without a proper speed-up.
I wonder if someone can give me a hint whether there is a clean and easy way to speed up this 2D loop.
TV is a 2D array with ~15k x 15k elements.
footprint_ix and footprint_iy are two lists of arrays which contain the index offsets to the neighbor pixels of pixel x, y in a ring-shaped manner. With m = 1 the 8 neighbor pixels are selected, with m = 2 the next 16, and so on.
The algorithm sums up the neighbor pixels of x, y and increases m as long as a threshold TAU is not exceeded.
The best solution I have come up with so far uses row-wise multiprocessing.
# create footprints
m_classes = 21
footprint_ix = []
footprint_iy = []
for m in range(1, m_classes):
    fp = np.ones((2 * m + 1, 2 * m + 1), dtype = int)
    fp[1:-1, 1:-1] = 0
    i, j = np.nonzero(fp)
    i = i - m
    j = j - m
    footprint_ix.append(i)
    footprint_iy.append(j)
for x in xrange( 0, rows):
    for y in xrange ( 0, cols):
        if disp[x,y] == np.inf:
            continue
        else:
            tv_m = 0
            for m_i in range (0, m_classes-1):
                m = m_i + 1
                try:
                    tv_m += np.sum( tv[footprint_ix[m_i] + x, footprint_iy[m_i] + y] ) / (8 * m)
                except IndexError:
                    tv_m = np.inf
                if tv_m >= TAU:
                    tv_classes[x,y] = m
                    break
                if m == m_classes - 1:
                    tv_classes[x,y] = m
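For what it's worth, one way to remove the two inner loops entirely is to precompute, for every pixel at once, the sum over each ring with a 2-D convolution against the ring footprint. The following is only a sketch of that idea; it assumes scipy is available and it ignores the exact out-of-bounds handling of the original loop:
import numpy as np
from scipy.ndimage import convolve

def classify_tv(tv, disp, m_classes=21, TAU=1.0):
    # accumulate the normalized ring sums for every pixel at once
    tv_m = np.zeros_like(tv, dtype=float)
    tv_classes = np.full(tv.shape, m_classes - 1, dtype=int)
    assigned = ~np.isfinite(disp)              # skip pixels where disp is inf
    for m in range(1, m_classes):
        fp = np.ones((2 * m + 1, 2 * m + 1), dtype=float)
        fp[1:-1, 1:-1] = 0                     # keep only the ring of width 1
        tv_m += convolve(tv, fp, mode='constant', cval=0.0) / (8 * m)
        hit = (tv_m >= TAU) & ~assigned
        tv_classes[hit] = m
        assigned |= hit
    tv_classes[~np.isfinite(disp)] = 0         # placeholder for the skipped pixels
    return tv_classes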