The input is a list of observations; every observation is a fixed-size set of ellipses (every ellipse is represented by 7 parameters).
The output is a list of images, one image per observation: we are basically drawing the ellipses from an observation onto a completely white image. If several ellipses overlap, the pixel gets the mean of their RGB values.
n, m = size of the image in pixels; an image is represented as an (n, m, 3) NumPy array (3 because of RGB coding)
N = number of ellipses in every individual observation
xx, yy = np.mgrid[:n, :m]
def elipses_population_to_img_population(elipses_population):
    population_size = elipses_population.shape[0]
    img_population = np.empty((population_size, n, m, 3))
    for j in range(population_size):
        imarray = np.empty((N, n, m, 3))
        imarray.fill(np.nan)
        for i in range(N):
            x = elipses_population[j, i, 0]
            y = elipses_population[j, i, 1]
            R = elipses_population[j, i, 2]
            G = elipses_population[j, i, 3]
            B = elipses_population[j, i, 4]
            a = elipses_population[j, i, 5]
            b = elipses_population[j, i, 6]
            xx_centered = xx - x
            yy_centered = yy - y
            elipse = (xx_centered / a)**2 + (yy_centered / b)**2 < 1
            imarray[i, elipse, :] = np.array([R, G, B])
        means_img = np.nanmean(imarray, axis=0)
        means_img = np.nan_to_num(means_img, nan=255)
        img_population[j, :, :, :] = means_img
    return img_population
The code works correctly, but I am looking for optimization advice. I run it many times in my code, so every small improvement would be helpful.
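One direction I have considered but not benchmarked (a sketch; it reuses n, m, N, xx and yy from the setup above): accumulate a per-pixel RGB sum and coverage count instead of allocating the (N, n, m, 3) NaN array and calling np.nanmean, which avoids the large temporary and the NaN handling.
import numpy as np

def elipses_population_to_img_population_sum(elipses_population):
    # Same interface as the function above, but accumulates sums/counts.
    population_size = elipses_population.shape[0]
    img_population = np.empty((population_size, n, m, 3))
    for j in range(population_size):
        rgb_sum = np.zeros((n, m, 3))
        count = np.zeros((n, m, 1))
        for i in range(N):
            x, y, R, G, B, a, b = elipses_population[j, i]
            mask = ((xx - x) / a)**2 + ((yy - y) / b)**2 < 1
            rgb_sum[mask] += (R, G, B)
            count[mask] += 1
        # mean where at least one ellipse covers the pixel, white (255) elsewhere
        img_population[j] = np.divide(rgb_sum, count,
                                      out=np.full((n, m, 3), 255.0),
                                      where=count > 0)
    return img_population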
The goal is to extract a random 2x5 patch from a 5x10 image, and to do so independently at random for every image in a batch. I'm looking to write a faster implementation that avoids for loops, but I haven't been able to figure out how to use the torch .gather operation with two index arrays (idx_h and idx_w in the code example).
Naive for loop:
import torch
b = 3 # batch size
h = 5 # height
w = 10 # width
crop_border = (3, 5) # number of pixels (height, width) to crop
x = torch.arange(b * h * w).reshape(b, h, w)
print(x)
dh_ = torch.randint(0, crop_border[0], size=(b,))
dw_ = torch.randint(0, crop_border[1], size=(b,))
_dh = h - (crop_border[0] - dh_)
_dw = w - (crop_border[1] - dw_)
idx_h = torch.stack([torch.arange(d_, _d) for d_, _d in zip(dh_, _dh)])
idx_w = torch.stack([torch.arange(d_, _d) for d_, _d in zip(dw_, _dw)])
print(idx_h, idx_w)
new_shape = (b, idx_h.shape[1], idx_w.shape[1])
cropped_x = torch.empty(new_shape)
for batch in range(b):
    for height in range(idx_h.shape[1]):
        for width in range(idx_w.shape[1]):
            cropped_x[batch, height, width] = x[
                batch, idx_h[batch, height], idx_w[batch, width]
            ]
print(cropped_x)
The index arrays needed to be repeated and reshaped to work with the gather operation. The fast_crop code is based on this PyTorch discussion: https://discuss.pytorch.org/t/similar-to-torch-gather-over-two-dimensions/118827
def fast_crop(x, idx1, idx2):
    """
    Compute a batched two-dimensional gather.
    x: N x B x V
    idx1: N x K x L tensor where idx1[i, j, k] is in [0, B)
    idx2: N x K x L tensor where idx2[i, j, k] is in [0, V)
    Return:
    cropped: N x K x L tensor where cropped[i, j, k] = x[i, idx1[i, j, k], idx2[i, j, k]]
    """
    x = x.contiguous()
    assert idx1.shape == idx2.shape
    lin_idx = idx2 + x.size(-1) * idx1
    x = x.view(-1, x.size(1) * x.size(2))
    lin_idx = lin_idx.view(-1, lin_idx.shape[1] * lin_idx.shape[2])
    cropped = x.gather(-1, lin_idx)
    return cropped.reshape(idx1.shape)
idx1 = torch.repeat_interleave(idx_h, idx_w.shape[1]).reshape(new_shape)
idx2 = torch.repeat_interleave(idx_w, idx_h.shape[1], dim=0).reshape(new_shape)
cropped = fast_crop(x, idx1, idx2)
(cropped == cropped_x).all()
Using realistic numbers of b = 100, h = 100, w = 130 and crop_border = (40, 95), a 10-trial run takes the for loop 32 s, while fast_crop takes only 0.043 s.
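For reference, the same result can also be obtained with plain advanced indexing and broadcasting, with no flattening step (a sketch I have not benchmarked against fast_crop): PyTorch accepts one index tensor per dimension, and the three index tensors broadcast against each other.
# out[i, j, k] = x[i, idx_h[i, j], idx_w[i, k]]; index shapes broadcast to (b, K, L)
batch_idx = torch.arange(b)[:, None, None]   # (b, 1, 1)
cropped_alt = x[batch_idx, idx_h[:, :, None], idx_w[:, None, :]]
print((cropped_alt == cropped_x).all())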
I am trying to speed up a multi-camera system that relies on calculating fundamental matrices between each camera pair.
Please note that the following is pseudocode; @ means matrix multiplication and | means concatenation.
I have code to calculate F for each pair, calculate_f(camera_matrix1_3x4, camera_matrix2_3x4), and the naive solution is
for c1 in cameras:
    for c2 in cameras:
        if c1 != c2:
            f = calculate_f(c1.proj_matrix, c2.proj_matrix)
This is slow, and I would like to speed it up. I have ~5000 cameras.
I have pre-calculated all rotations and translations (in world coordinates) between every pair of cameras, and the internal parameters k, such that for each camera c it holds that c.matrix = c.k @ (c.rot | c.t).
Can I use the parameters r, t to help speed up the following calculations of F?
In mathematical form, for 3 different cameras c1, c2, c3 I have
f12 = calculate_f(c1.proj_matrix, c2.proj_matrix), and I want f23 = calculate_f(c2.proj_matrix, c3.proj_matrix) and f13 = calculate_f(c1.proj_matrix, c3.proj_matrix) from some function f23, f13 = fast_f(f12, c1.r, c1.t, c2.r, c2.t, c3.r, c3.t)?
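For reference, the classical closed form I could use per pair (a sketch; the sign/direction conventions are not verified against calculate_f, so it would need checking on a couple of pairs) builds F directly from the relative pose and the intrinsics, F = k2^-T [t]_x R k1^-1:
import numpy as np

def skew(t):
    # cross-product (skew-symmetric) matrix [t]_x
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_rt(k1, r1, t1, k2, r2, t2):
    # relative pose taking camera-1 coordinates to camera-2 coordinates
    r_rel = r2 @ r1.T
    t_rel = t2 - r_rel @ t1
    return np.linalg.inv(k2).T @ skew(t_rel) @ r_rel @ np.linalg.inv(k1)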
A working function for calculating the fundamental matrix in numpy:
def fundamental_3x3_from_projections(p_left_3x4: np.ndarray, p_right_3x4: np.ndarray) -> np.ndarray:
    # The following is based on OpenCV-contrib's C++ implementation.
    # see https://github.com/opencv/opencv_contrib/blob/master/modules/sfm/src/fundamental.cpp#L109
    # see https://sourishghosh.com/2016/fundamental-matrix-from-camera-matrices/
    # see https://answers.opencv.org/question/131017/how-do-i-compute-the-fundamental-matrix-from-2-projection-matrices/
    f_3x3 = np.zeros((3, 3))
    p1, p2 = p_left_3x4, p_right_3x4

    x = np.empty((3, 2, 4), dtype=np.float64)
    x[0, :, :] = np.vstack([p1[1, :], p1[2, :]])
    x[1, :, :] = np.vstack([p1[2, :], p1[0, :]])
    x[2, :, :] = np.vstack([p1[0, :], p1[1, :]])

    y = np.empty((3, 2, 4), dtype=np.float64)
    y[0, :, :] = np.vstack([p2[1, :], p2[2, :]])
    y[1, :, :] = np.vstack([p2[2, :], p2[0, :]])
    y[2, :, :] = np.vstack([p2[0, :], p2[1, :]])

    for i in range(3):
        for j in range(3):
            xy = np.vstack([x[j, :], y[i, :]])
            f_3x3[i, j] = np.linalg.det(xy)

    return f_3x3
NumPy is clearly not optimized for working on small matrices. The parsing of CPython input objects, internal checks, and function calls introduce a significant overhead that is far bigger than the execution time needed to perform the actual computation, not to mention that the creation of many temporary arrays is also expensive. One solution to this problem is to use Numba or Cython.
Moreover, the computation of the determinant can be optimized a lot, since you know the exact size of the matrix and part of the matrix does not change between iterations. Indeed, using a basic algebraic expression for the 4x4 determinant lets the compiler optimize the overall computation heavily, thanks to common sub-expression elimination (not performed by the CPython interpreter) and the removal of the complex loops/conditionals inside np.linalg.det.
Here is the resulting code:
import numba as nb
import numpy as np

@nb.njit('float64(float64[:,::1])')
def det_4x4(mat):
    a, b, c, d = mat[0,0], mat[0,1], mat[0,2], mat[0,3]
    e, f, g, h = mat[1,0], mat[1,1], mat[1,2], mat[1,3]
    i, j, k, l = mat[2,0], mat[2,1], mat[2,2], mat[2,3]
    m, n, o, p = mat[3,0], mat[3,1], mat[3,2], mat[3,3]
    return a * (f * (k*p - l*o) + g * (l*n - j*p) + h * (j*o - k*n)) + \
           b * (e * (l*o - k*p) + g * (i*p - l*m) + h * (k*m - i*o)) + \
           c * (e * (j*p - l*n) + f * (l*m - i*p) + h * (i*n - j*m)) + \
           d * (e * (k*n - j*o) + f * (i*o - k*m) + g * (j*m - i*n))

@nb.njit('float64[:,::1](float64[:,::1], float64[:,::1])')
def fundamental_3x3_from_projections(p_left_3x4, p_right_3x4):
    f_3x3 = np.empty((3, 3))
    p1, p2 = p_left_3x4, p_right_3x4

    x = np.empty((3, 2, 4), dtype=np.float64)
    x[0, 0, :] = p1[1, :]
    x[0, 1, :] = p1[2, :]
    x[1, 0, :] = p1[2, :]
    x[1, 1, :] = p1[0, :]
    x[2, 0, :] = p1[0, :]
    x[2, 1, :] = p1[1, :]

    y = np.empty((3, 2, 4), dtype=np.float64)
    y[0, 0, :] = p2[1, :]
    y[0, 1, :] = p2[2, :]
    y[1, 0, :] = p2[2, :]
    y[1, 1, :] = p2[0, :]
    y[2, 0, :] = p2[0, :]
    y[2, 1, :] = p2[1, :]

    xy = np.empty((4, 4), dtype=np.float64)
    for i in range(3):
        xy[2:4, :] = y[i, :, :]
        for j in range(3):
            xy[0:2, :] = x[j, :, :]
            f_3x3[i, j] = det_4x4(xy)

    return f_3x3
This is 130 times faster on my machine (85.6 us VS 0.66 us).
You can speed up the process even more, by a factor of two, if the applied function is commutative (i.e. f(c1, c2) == f(c2, c1)); in that case you only need to compute the upper triangular part of the pair grid. It turns out that your function has an interesting property: f(c1, c2) == f(c2, c1).T appears to always hold, so the same trick applies. Another possible optimization is to run the loop in parallel.
With all these optimizations, the resulting program should be about three orders of magnitude faster.
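As an illustration of the last two points (a sketch only; it assumes the imports and the njit-compiled fundamental_3x3_from_projections above, plus a (num_cameras, 3, 4) C-contiguous float64 array of projection matrices, and it relies on the empirical transpose property noted above):
@nb.njit(parallel=True)
def all_pairs_fundamental(proj):
    # proj: (num_cameras, 3, 4) C-contiguous float64 projection matrices.
    # Only the upper triangle of the pair grid is computed; the lower
    # triangle is filled with the transposed result.
    num = proj.shape[0]
    out = np.zeros((num, num, 3, 3))
    for i in nb.prange(num):
        for j in range(i + 1, num):
            f = fundamental_3x3_from_projections(proj[i], proj[j])
            for r in range(3):
                for c in range(3):
                    out[i, j, r, c] = f[r, c]
                    out[j, i, c, r] = f[r, c]
    return out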
Analysis of the accuracy of the approach
The precision appears to be similar to that of the original implementation. Depending on the input matrix, the results are sometimes more accurate and sometimes less accurate than the NumPy method. This is specifically due to the computation of the determinant. With 24-digit decimals, there is no visible error compared to the reliable result from Wolfram Alpha. This shows that the method is correct; the results differ only because of numerical-stability details. Here is the code used to test the accuracy of the methods:
# Imports
from decimal import Decimal
import numba as nb
import numpy as np
# Definitions
def det_4x4(mat):
    a, b, c, d = mat[0,0], mat[0,1], mat[0,2], mat[0,3]
    e, f, g, h = mat[1,0], mat[1,1], mat[1,2], mat[1,3]
    i, j, k, l = mat[2,0], mat[2,1], mat[2,2], mat[2,3]
    m, n, o, p = mat[3,0], mat[3,1], mat[3,2], mat[3,3]
    return a * (f * (k*p - l*o) + g * (l*n - j*p) + h * (j*o - k*n)) + \
           b * (e * (l*o - k*p) + g * (i*p - l*m) + h * (k*m - i*o)) + \
           c * (e * (j*p - l*n) + f * (l*m - i*p) + h * (i*n - j*m)) + \
           d * (e * (k*n - j*o) + f * (i*o - k*m) + g * (j*m - i*n))

@nb.njit('float64(float64[:,::1])')
def det_4x4_numba(mat):
    a, b, c, d = mat[0,0], mat[0,1], mat[0,2], mat[0,3]
    e, f, g, h = mat[1,0], mat[1,1], mat[1,2], mat[1,3]
    i, j, k, l = mat[2,0], mat[2,1], mat[2,2], mat[2,3]
    m, n, o, p = mat[3,0], mat[3,1], mat[3,2], mat[3,3]
    return a * (f * (k*p - l*o) + g * (l*n - j*p) + h * (j*o - k*n)) + \
           b * (e * (l*o - k*p) + g * (i*p - l*m) + h * (k*m - i*o)) + \
           c * (e * (j*p - l*n) + f * (l*m - i*p) + h * (i*n - j*m)) + \
           d * (e * (k*n - j*o) + f * (i*o - k*m) + g * (j*m - i*n))
# Example matrix
precise_xy = np.array(
[[Decimal('42'),Decimal('-6248'),Decimal('4060'),Decimal('845')],
[Decimal('-0.00992'),Decimal('-0.704'),Decimal('-0.71173298417'),Decimal('300.532')],
[Decimal('-8.94274'),Decimal('-7554.39'),Decimal('604.57'),Decimal('706282')],
[Decimal('-0.0132'),Decimal('-0.2757'),Decimal('-0.961'),Decimal('247.65')]]
)
xy = precise_xy.astype(np.float64)
res_numpy = Decimal(np.linalg.det(xy))
res_numba = Decimal(det_4x4_numba(xy))
res_precise = det_4x4(precise_xy)
# The Wolfram Alpha expression used is:
# det({{42,-6248,4060,845},
# {-0.00992,-0.704,-0.71173298417,300.532},
# {-8.94274,-7554.39,604.57,706282},
# {-0.0132,-0.2757,-0.961,247.65}})
res_wolframalpha = Decimal('-323312.2164828991329828243')
# The result from Wolfram Alpha has 25-digit precision
# and is exactly the same as the one from det_4x4 using 24-digit decimals.
assert res_precise == res_wolframalpha
print(abs((res_numpy-res_precise)/res_precise)) # 1.7E-14
print(abs((res_numba-res_precise)/res_precise)) # 3.1E-14
# => Similar relative error (Numba is slightly less accurate here,
#    but neither is close to the 1e-16 relative epsilon)
So I have this 3x3 G matrix (not shown here; it's irrelevant to my problem) that I created using the two variables u (a vector, x - y) and the scalar k, where x_j = (x_1(j), x_2(j), x_3(j)) and y_j = (y_1(j), y_2(j), y_3(j)). alpha_j is a 3x3 matrix. The A matrix is a block-diagonal matrix of size 3n x 3n. I am having trouble with the W matrix. How do I code a matrix of size 3n x 3n, where the (i, j)-th block is the 3x3 matrix given by alpha_i * G_ij * alpha_j? I am lost.
My alpha_j matrix also seems to be having some trouble: the loop keeps throwing the error "only length-1 arrays can be converted to Python scalars." Please help.
def W(x, y, k, alpha, A):
    u = x - y
    n = x.shape[0]
    W = np.zeros((3*n, 3*n))
    for i in range(0, n-1):
        for j in range(0, n-1):
            #u = -np.array([[x[i,0] - x[j,0]], [x[i,1] - x[j,1]], [0]]) ??
            W[i][j] = (alpha_j(alpha, A) * G(u, k) * alpha_j(alpha, A))
        W[i][i] = np.zeros((n, n))
    return W

def alpha_j(a, A):
    alph = np.array([[0,0,0],[0,0,0],[0,0,0]],complex)
    rho = np.random.rand(3,1)
    for i in range(0, 2):
        for j in range(0, 2):
            alph[i][j] = (rho[i] * a * A[i][j])
    return alph
#-------------------------------------------------------------------
x1 = np.array([[1], [2], [0]])
y1 = np.array([[4], [5], [0]])
# SYSTEM PARAMETERS
# incoming Wave angle
theta = 0 # can range from [0, 2pi)
# susceptibility
chi = 10 + 1j
# wavelength
lam = 0.5 # microns (values between .4-.7)
# frequency
k = (2 * np.pi)/lam # 1/microns
# volume
V_0 = (0.05)**3 # microns^3
# incoming wave vector
K = k * np.array([[0], [np.sin(theta)], [np.cos(theta)]])
# polarization vector
vecinc = np.array([[1], [0], [0]]) # (can choose any vector perpendicular to K)
# for the fixed alpha case
alpha = (V_0 * 3 * chi)/(chi + 3)
# 3 x 3 matrix
A = np.matlib.identity(3) # could be any symmetric matrix,
#-------------------------------------------------------------------
# TEST FUNCTIONS
test = G((x1-y1), k)
print(test)
w = W(x1, y1, k, alpha, A)
print(w)
Sometimes my W loop throws the error "can't set an array element with a sequence." But I need to set each array element in this arbitrary matrix W to the 3x3 matrix created by multiplying alpha by G...
To your question of how to create a new array with a block for each element, the following should do the trick:
import numpy as np

G = np.random.random([3, 3])
result = np.zeros([9, 9])
num_blocks = 3
a = np.random.random([3, 3])
b = np.random.random([3, 3])

for i in range(G.shape[0]):
    for j in range(G.shape[1]):
        block_result = a * G[i, j] * b
        for k in range(num_blocks):
            for l in range(num_blocks):
                result[3*i + k, 3*j + l] = block_result[k, l]
You should be able to generalize from there. I hope I've understood correctly.
EDIT: It looks like I haven't understood correctly. I'm leaving it in hopes it spurs you to an answer. The general idea is to generate ranges of indices to operate on, and then just operate on them directly. Slicing might be helpful, too.
Ah, you asked how to create a diagonal filled with blocks. In that case:
num_diagonal_blocks = 3  # for example
result = np.zeros([3 * num_diagonal_blocks, 3 * num_diagonal_blocks])
for block_dim in range(num_diagonal_blocks):
    # do your block calculation here, producing a 3x3 block_result...
    block_result = a * G * b  # placeholder for the per-block computation
    for k in range(G.shape[0]):
        for l in range(G.shape[1]):
            result[3*block_dim + k, 3*block_dim + l] = block_result[k, l]
I think that's nearly it.
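If I'm now reading the question right, here is a sketch of the full W construction it describes (hypothetical names: G_block stands for the OP's G(u, k), and alphas for a list of per-point alpha_i matrices; adapt to your actual functions). The (i, j) block is written with slice assignment, and the diagonal blocks are left as zeros, as in the question's code:
import numpy as np

def build_W(points_x, points_y, k, alphas, G_block):
    # points_x, points_y: (n, 3) arrays of points; alphas: list of n (3, 3) matrices;
    # G_block(u, k): returns the 3x3 matrix G for the displacement u.
    n = len(points_x)
    W = np.zeros((3 * n, 3 * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # leave the diagonal blocks as zeros
            u = points_x[i] - points_y[j]
            block = alphas[i] @ G_block(u, k) @ alphas[j]
            W[3*i:3*i+3, 3*j:3*j+3] = block
    return W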
I'm trying to implement non-negative matrix factorization using the Kullback-Leibler divergence as a similarity measure. The algorithm is described in: http://hebb.mit.edu/people/seung/papers/nmfconverge.pdf. Below is my Python/NumPy implementation, with an example matrix to run it on.
In a nutshell, the algorithm is supposed to learn matrices W(n by r) and H(r by m) such that V(n by m) is approximately WH. You start with random values in W and H, and by following the update rules described in the Seung and Lee paper, you're supposed to get closer and closer to good approximations for W and H.
The algorithm is proven to monotonically reduce the divergence measure, but that's not what happens in my implementation. Instead, it settles into an alternation between two divergence values. If you look at W and H, you can see that the resulting factorization is not particularly good.
I've wondered whether to use the updated or old H when calculating the update for W. I tried it both ways, and it doesn't change the behavior of the implementation.
I've checked my implementation against the paper a bunch of times, and I don't see what I'm doing wrong. Can anyone shed some light on the issue?
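For reference, the divergence I'm computing (equation (3) in the paper, and the divergence line in the code below) is
D(V \| WH) = \sum_{i\mu} \left( V_{i\mu} \log \frac{V_{i\mu}}{(WH)_{i\mu}} - V_{i\mu} + (WH)_{i\mu} \right).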
import numpy as np

def update(V, W, H, r, n, m):
    n, m = V.shape
    WH = W.dot(H)

    # equation (5)
    H_coeff = np.zeros(H.shape)
    for a in range(r):
        for mu in range(m):
            for i in range(n):
                H_coeff[a, mu] += W[i, a] * V[i, mu] / WH[i, mu]
            H_coeff[a, mu] /= sum(W)[a]
    H = H * H_coeff

    W_coeff = np.zeros(W.shape)
    for i in range(n):
        for a in range(r):
            for mu in range(m):
                W_coeff[i, a] += H[a, mu] * V[i, mu] / WH[i, mu]
            W_coeff[i, a] /= sum(H.T)[a]
    W = W * W_coeff

    return W, H

def factor(V, r, iterations=100):
    n, m = V.shape
    avg_V = sum(sum(V))/n/m
    W = np.random.random(n*r).reshape(n, r)*avg_V
    H = np.random.random(r*m).reshape(r, m)*avg_V

    for i in range(iterations):
        WH = W.dot(H)
        divergence = sum(sum(V * np.log(V/WH) - V + WH))  # equation (3)
        print("At iteration " + str(i) + ", the Kullback-Leibler divergence is", divergence)
        W, H = update(V, W, H, r, n, m)

    return W, H

V = np.arange(0.01, 1.01, 0.01).reshape(10, 10)
W, H = factor(V, 6)
How to eliminate the alternation effect:
The very last line of the Proof of Theorem 2 reads, "By reversing the roles of H and W, the update rule for W can similarly be shown to be nonincreasing."
Thus we can surmise that updating H can be done independently of updating W. That means after updating H:
H = H * H_coeff
we should also update the intermediate value WH before updating W:
WH = W.dot(H)
W = W * W_coeff
Both updates decrease the divergence.
Try it: Just stick WH = W.dot(H) before the computation for W_coeff, and the alternation effect goes away.
Simplifying the code:
When dealing with NumPy arrays, use their mean and sum methods, and avoid using the Python sum function:
avg_V = sum(sum(V))/n/m
can be written as
avg_V = V.mean()
and
divergence = sum(sum(V * np.log(V/WH) - V + WH)) # equation (3)
can be written as
divergence = ((V * np.log(V_over_WH)) - V + WH).sum()
Avoid the Python builtin sum function: it is slower than the NumPy sum method, and it is not as versatile (it does not allow you to specify the axis along which to sum). Above, we managed to eliminate two calls to Python's sum with one call to NumPy's sum or mean method.
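A tiny illustration of the difference (hypothetical 2x3 array):
a = np.arange(6).reshape(2, 3)
sum(a)          # array([3, 5, 7]) -- Python's sum iterates over the first axis only
a.sum()         # 15 -- sums every element
a.sum(axis=0)   # array([3, 5, 7]) -- explicit axis control, and faster on large arrays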
Eliminate the triple for-loop:
But a bigger improvement in both speed and readability can be had by replacing
H_coeff = np.zeros(H.shape)
for a in range(r):
    for mu in range(m):
        for i in range(n):
            H_coeff[a, mu] += W[i, a] * V[i, mu] / WH[i, mu]
        H_coeff[a, mu] /= sum(W)[a]
H = H * H_coeff
with
V_over_WH = V/WH
H *= (np.dot(V_over_WH.T, W) / W.sum(axis=0)).T
Explanation:
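For reference, my transcription of the equation (5) update for H, which the triple loop above implements, is
H_{a\mu} \leftarrow H_{a\mu} \cdot \frac{\sum_i W_{ia} V_{i\mu} / (WH)_{i\mu}}{\sum_k W_{ka}}.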
If you look at the equation 5 update rule for H, first notice that indices for V and (W H) are identical. So you can replace V / (W H) with
V_over_WH = V/WH
Next, note that in the numerator we are summing over the index i, which is the first index in both W and V_over_WH. We can express that as matrix multiplication:
np.dot(V_over_WH.T, W).T
And the denominator is simply:
W.sum(axis=0).T
If we divide the numerator and denominator
(np.dot(V_over_WH.T, W) / W.sum(axis=0)).T
we get a matrix indexed by the two remaining indices, alpha and mu, in that order. That is the same as the indices for H. So we want to multiply H by this ratio element-wise. Perfect. NumPy multiplies arrays element-wise by default.
Thus, we can express the entire update rule for H as
H *= (np.dot(V_over_WH.T, W) / W.sum(axis=0)).T
So, putting it all together:
import numpy as np
np.random.seed(1)

def update(V, W, H, WH, V_over_WH):
    # equation (5)
    H *= (np.dot(V_over_WH.T, W) / W.sum(axis=0)).T
    WH = W.dot(H)
    V_over_WH = V / WH

    W *= np.dot(V_over_WH, H.T) / H.sum(axis=1)
    WH = W.dot(H)
    V_over_WH = V / WH
    return W, H, WH, V_over_WH

def factor(V, r, iterations=100):
    n, m = V.shape
    avg_V = V.mean()
    W = np.random.random(n * r).reshape(n, r) * avg_V
    H = np.random.random(r * m).reshape(r, m) * avg_V
    WH = W.dot(H)
    V_over_WH = V / WH
    for i in range(iterations):
        W, H, WH, V_over_WH = update(V, W, H, WH, V_over_WH)
        # equation (3)
        divergence = ((V * np.log(V_over_WH)) - V + WH).sum()
        print("At iteration {i}, the Kullback-Leibler divergence is {d}".format(
            i=i, d=divergence))
    return W, H

V = np.arange(0.01, 1.01, 0.01).reshape(10, 10)
# V = np.arange(1,101).reshape(10,10).astype('float')
W, H = factor(V, 6)