Given an n x m matrix with entries x_{i, j}, the compositional variance is an m x m matrix whose (i, j) entry includes the expression
sum_{k=1,...,n} [ln(x_{k, i} / x_{k, j})]^2
(it includes other, easily calculated, expressions).
This is very easy to calculate in a loop, but how can it be calculated using vectorization?
Here is the crappy loop code:
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6]], dtype=float)
v = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        for k in range(2):
            v[i, j] += np.log(x[k, i] / x[k, j])**2
Assuming you mean np.log(x[k, i] / x[k, j])**2 in NumPy terms, summed over k = 1:n, one vectorized approach uses broadcasting -
(np.log(x[:, :, None] / x[:, None]) ** 2).sum(0)
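A quick sanity check against the loop (a sketch, reusing the 2 x 3 x from the question):
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6]], dtype=float)

# Loop version from the question
v = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        for k in range(2):
            v[i, j] += np.log(x[k, i] / x[k, j])**2

# Broadcasted version: axis 0 runs over k, axes 1 and 2 over i and j
v_vec = (np.log(x[:, :, None] / x[:, None]) ** 2).sum(0)

print(np.allclose(v, v_vec))  # True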
I have an L x L matrix A, which I currently fill in using the following code:
A = np.zeros((L, L))
for J in range(X):
    for a in range(L):
        for b in range(L):
            A[a][b] += alpha[J, a] * O[b, J] * A_old[a, b] * betas[J+2, b]
Where X is an integer defined elsewhere, alpha and betas are of shape (X, L), O is of shape (L, X), and A_old is of shape (L, L). I'm concerned about the speed of this code, and am trying to find a more numpythonic way to approach filling in this matrix. My instinct is to do something like:
for J in range(X):
    A += alpha[J, :] * O[:, J] * A_old[:, :] * betas[J+2, :]
But this doesn't broadcast the operations correctly because of the A_old matrix (the resulting shape is right, but the values are not). What's a good way to condense this loop using numpy?
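One way to condense it (a sketch, not the asker's final solution; it assumes betas actually has at least X + 2 rows, as the J+2 index implies) is to factor A_old out of the sum and contract the remaining sum over J with np.einsum:
import numpy as np

# Hypothetical small sizes for illustration
X, L = 4, 3
alpha = np.random.rand(X, L)
betas = np.random.rand(X + 2, L)   # assumed to have X + 2 rows
O = np.random.rand(L, X)
A_old = np.random.rand(L, L)

# Loop version from the question
A_loop = np.zeros((L, L))
for J in range(X):
    for a in range(L):
        for b in range(L):
            A_loop[a][b] += alpha[J, a] * O[b, J] * A_old[a, b] * betas[J + 2, b]

# A[a, b] = A_old[a, b] * sum_J alpha[J, a] * O[b, J] * betas[J + 2, b]
A_vec = A_old * np.einsum('ja,bj,jb->ab', alpha, O, betas[2:2 + X])

print(np.allclose(A_loop, A_vec))  # True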
Roughly I want to convert this (non-numpy) for-loop:
N = len(left)
M = len(right)
matrix = np.zeros((N, M))
for i in range(N):
    for j in range(M):
        matrix[i][j] = scipy.stats.binom.pmf(left[i], C, right[j])
It's sort of like a dot product but of course mathematically not a dot product. How would I normally vectorize or make something like this pythonic/numpythonic?
scipy.stats.binom.pmf already is vectorized. However, you have to broadcast your inputs in order to get your desired result.
broadcast_out = scipy.stats.binom.pmf(left[:, None], C, right)
Validation
import numpy as np
import scipy.stats

np.random.seed(314)
left = np.arange(5, dtype=float)
right = np.random.rand(5)
C = 5
broadcast_out = scipy.stats.binom.pmf(left[:, None], C, right)
N = len(left)
M = len(right)
matrix = np.zeros((N, M))
for i in range(N):
    for j in range(M):
        matrix[i][j] = scipy.stats.binom.pmf(left[i], C, right[j])
print(np.array_equal(matrix, broadcast_out))
True
I have an m by n matrix A, implemented as a numpy array.
import numpy as np
m = 10
n = 7
A = np.random.rand(m, n)
I want to compute the m by m matrix B whose entries are
B[i, j] = sum_{k=1,...,n} sum_{l=1,...,n} A[i, k] * A[j, l]
What is the easiest way to do this without making explicit for loops?
Notice that the sum over k in your expression only affects the first factor, while the sum over l only involves the second:
sum_{k=1,...,n} sum_{l=1,...,n} A[i, k] * A[j, l] =
(sum_{k=1,...,n} A[i, k]) * (sum_{l=1,...,n} A[j, l])
The expressions in parentheses are, except for the names of the indices, the same, so define
sA = np.sum(A, axis=1)
Then your B is the so-called outer product of sA with itself:
B = np.outer(sA, sA)
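A quick numerical check of the factorization (a sketch, reusing the m, n and A defined in the question):
import numpy as np

m, n = 10, 7
A = np.random.rand(m, n)

sA = np.sum(A, axis=1)
B = np.outer(sA, sA)

# Brute-force double sum for comparison
B_loop = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        B_loop[i, j] = sum(A[i, k] * A[j, l] for k in range(n) for l in range(n))

print(np.allclose(B, B_loop))  # True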
I'd like to transform a tensor T of size (n x n x m x m) into a tensor U of size (n x m x m) while only retrieving the diagonal elements of T over the (n x n) chunks (i.e. U[i, k, l] = T[i, i, k, l]). torch.diag() only works with 2-D tensors and I really fail to see how to do this without looping over the indexes of the elements (which I'd like to avoid, as I think it is computationally inefficient). In short, I'd like to vectorize the following code:
U = torch.zeros(n, m, m)
for i in range(n):
    for k in range(m):
        for l in range(m):
            U[i][k][l] = T[i][i][k][l]
I'm totally new to PyTorch and I tried many combinations of functions, but none of them gives me a satisfying result. Does anyone have an idea?
You can generate the indexes using np.meshgrid
i, k, l = np.meshgrid(range(n), range(m), range(m))
U[i, k, l] = T[i, i, k, l]
for completeness I did:
import numpy as np
import torch

n = 3
m = 5
T = torch.arange(n * n * m * m).view(n, n, m, m)
U = torch.zeros(n, m, m)
U_ = torch.zeros(n, m, m)
i, k, l = np.meshgrid(range(n), range(m), range(m))
U_[i, k, l] = T[i, i, k, l]
for i in range(n):
    for k in range(m):
        for l in range(m):
            U[i][k][l] = T[i][i][k][l]
U = U.view(-1)
U_ = U_.view(-1)
print((U == U_).all())
The output is True so I assume it is correct.
When applied to 2-D matrices, torch.diag() behaves like torch.diagonal().
diagonal() itself allows you to specify which two dimensions of an arbitrary-rank tensor the diagonal is taken from; by default these are 0 and 1. Note that the diagonal dimension is appended as the last dimension of the result, so a permute is needed to obtain the desired (n x m x m) shape:
U = T.diagonal().permute(2, 0, 1)
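A quick check against the triple loop (a sketch, with small hypothetical n and m, and T cast to float so it matches the float result tensor):
import torch

n, m = 3, 5
T = torch.arange(n * n * m * m, dtype=torch.float).view(n, n, m, m)

# diagonal() appends the diagonal dimension last: (m, m, n) -> move it to the front
U = T.diagonal(dim1=0, dim2=1).permute(2, 0, 1)

U_loop = torch.zeros(n, m, m)
for i in range(n):
    for k in range(m):
        for l in range(m):
            U_loop[i][k][l] = T[i][i][k][l]

print(torch.equal(U, U_loop))  # True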
The following problem concerns evaluating many monomials (x**k * y**l * z**m) at many points.
I would like to compute the "inner power" of two numpy arrays, i.e.,
import numpy

a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)
out = numpy.ones((10, 5))
for i in range(10):
    for j in range(5):
        for k in range(3):
            out[i, j] *= a[i, k]**b[k, j]
print(out.shape)
If the line instead read
out[i, j] += a[i, k]*b[k, j]
this would be a number of inner products, computable with a simple dot or einsum.
Is it possible to perform the above loop in just one numpy line?
What about thinking of it in terms of logarithms:
import numpy

a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)
out = numpy.exp(numpy.matmul(numpy.log(a), b))
Since c_ij = prod(a_ik ** b_kj, k=1..K), then log(c_ij) = sum(log(a_ik) * b_kj, k=1..K).
Note: having zeros in a may mess up the result (negatives too, but then the result wouldn't be well defined anyway). I have given it a try and it doesn't seem to actually break; I don't know whether that behavior is guaranteed by NumPy, but, to be safe, you can add something like this at the end:
out[numpy.logical_or.reduce(a < eps, axis=1)] = 0  # eps: some small positive threshold
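For reference, a quick check of the log-exp trick against the original loop (a sketch; with numpy.random.rand the entries of a are positive, so no special handling of zeros is needed here):
import numpy

a = numpy.random.rand(10, 3)  # entries in (0, 1), so taking the log is safe
b = numpy.random.rand(3, 5)

out_log = numpy.exp(numpy.matmul(numpy.log(a), b))

out_loop = numpy.ones((10, 5))
for i in range(10):
    for j in range(5):
        for k in range(3):
            out_loop[i, j] *= a[i, k]**b[k, j]

print(numpy.allclose(out_log, out_loop))  # True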
You can use broadcasting after extending those arrays to 3D versions -
(a[:,:,None]**b[None,:,:]).prod(axis=1)
Simply put -
(a[...,None]**b[None]).prod(1)
Basically, we keep the last axis of a aligned with the first axis of b (the shared length-3 axis), broadcast the remaining axes against each other while taking element-wise powers, and then reduce with a product along the aligned axis. Schematically, using the given sample shapes -
10 x 3 x 1
1 x 3 x 5
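To make the broadcasting explicit, the intermediate shapes can be inspected (a small sketch reusing the a and b from the question):
import numpy

a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)

print(a[:, :, None].shape)                            # (10, 3, 1)
print(b[None, :, :].shape)                            # (1, 3, 5)
print((a[:, :, None]**b[None, :, :]).shape)           # (10, 3, 5) after broadcasting
print((a[:, :, None]**b[None, :, :]).prod(1).shape)   # (10, 5), the final result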
Two more solutions:
Inlining
numpy.array([
    numpy.prod([a[:, i]**bb[i] for i in range(len(bb))], axis=0)
    for bb in b.T
]).T
and using power.outer:
numpy.prod([numpy.power.outer(a[:, k], b[k]) for k in range(len(b))], axis=0)
Both are a bit slower than the broadcasting solution.
Even with some logic to accommodate zero and negative values, the exp-log solution takes the cake.
Code to reproduce the plot:
import numpy
import perfplot


def loop(data):
    a, b = data
    m = a.shape[0]
    n = b.shape[1]
    out = numpy.ones((m, n))
    for i in range(m):
        for j in range(n):
            for k in range(3):
                out[i, j] *= a[i, k]**b[k, j]
    return out


def broadcasting(data):
    a, b = data
    return (a[..., None]**b[None]).prod(1)


def log_exp(data):
    a, b = data
    neg_a = numpy.zeros(a.shape, dtype=int)
    neg_a[a < 0.0] = 1
    odd_b = numpy.zeros(b.shape, dtype=int)
    odd_b[b % 2 == 1] = 1
    negative_count = numpy.dot(neg_a, odd_b)
    out = (-1)**negative_count * numpy.exp(
        numpy.matmul(
            numpy.log(abs(a), where=abs(a) > 0.0),
            b
        ))
    zero_a = numpy.zeros(a.shape, dtype=int)
    zero_a[a == 0.0] = 1
    pos_b = numpy.zeros(b.shape, dtype=int)
    pos_b[b > 0] = 1
    zero_count = numpy.dot(zero_a, pos_b)
    out[zero_count > 0] = 0.0
    return out


def inline(data):
    a, b = data
    return numpy.array([
        numpy.prod([a[:, i]**bb[i] for i in range(len(bb))], axis=0)
        for bb in b.T
    ]).T


def outer_power(data):
    a, b = data
    return numpy.prod([
        numpy.power.outer(a[:, k], b[k]) for k in range(len(b))
    ], axis=0)


perfplot.show(
    setup=lambda n: (
        numpy.random.rand(n, 3) - 0.5,
        numpy.random.randint(0, 10, (3, n))
    ),
    n_range=[2**k for k in range(11)],
    repeat=10,
    kernels=[
        loop,
        broadcasting,
        inline,
        log_exp,
        outer_power
    ],
    logx=True,
    logy=True,
    xlabel='len(a)',
)
import numpy
a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)
out = [[numpy.prod([a[i, k]**b[k, j] for k in range(3)]) for j in range(5)] for i in range(10)]