Double dot product with broadcasting in numpy - python

I have the following operation:
import numpy as np
x = np.random.rand(3, 5, 5)
w = np.random.rand(5, 5)
y = np.zeros((3, 5, 5))
for i in range(3):
    y[i] = np.dot(w.T, np.dot(x[i], w))
Which corresponds to the pseudo-expression y[m,i,j] = sum( w[k,i] * x[m,k,l] * w[l,j], axes=[k,l] ), or equivalently, simply the dot product of w.T, x, w broadcast over the first dimension of x.
How can I implement it with numpy's broadcasting rules?
Thanks in advance.

Here's one vectorized approach with np.tensordot, which should be better than broadcasting + summation any day -
# Take care of "np.dot(x[i],w)" term
x_w = np.tensordot(x,w,axes=((2),(0)))
# Perform "np.dot(w.T,np.dot(x[i],w))" : "np.dot(w.T,x_w)"
y_out = np.tensordot(x_w,w,axes=((1),(0))).swapaxes(1,2)
Alternatively, all of the mess can be taken care of with one np.einsum call, though it could be slower -
y_out = np.einsum('ab,cae,eg->cbg',w,x,w)
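Either way, a quick sanity check against the question's loop confirms that both approaches produce the same result (a minimal sketch, reusing the shapes from the question) -
import numpy as np
x = np.random.rand(3, 5, 5)
w = np.random.rand(5, 5)
# reference: the original loop
y_ref = np.zeros((3, 5, 5))
for i in range(3):
    y_ref[i] = np.dot(w.T, np.dot(x[i], w))
x_w = np.tensordot(x, w, axes=((2), (0)))
y_td = np.tensordot(x_w, w, axes=((1), (0))).swapaxes(1, 2)
y_es = np.einsum('ab,cae,eg->cbg', w, x, w)
print(np.allclose(y_ref, y_td), np.allclose(y_ref, y_es))  # True True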
Runtime test -
In [114]: def tensordot_app(x, w):
     ...:     x_w = np.tensordot(x,w,axes=((2),(0)))
     ...:     return np.tensordot(x_w,w,axes=((1),(0))).swapaxes(1,2)
     ...:
     ...: def einsum_app(x, w):
     ...:     return np.einsum('ab,cae,eg->cbg',w,x,w)
     ...:
In [115]: x = np.random.rand(30,50,50)
     ...: w = np.random.rand(50,50)
     ...:
In [116]: %timeit tensordot_app(x, w)
1000 loops, best of 3: 477 µs per loop
In [117]: %timeit einsum_app(x, w)
1 loop, best of 3: 219 ms per loop
Giving the broadcasting a chance
The sum-notation was -
y[m,i,j] = sum( w[k,i] * x[m,k,l] * w[l,j], axes=[k,l] )
Thus, the three terms would be stacked for broadcasting, like so -
w : [ N x k x i x N x N]
x : [ m x k x N x l x N]
w : [ N x N x N x l x j]
where N represents a new axis appended to facilitate broadcasting along those dims.
The terms with new axes being added with None/np.newaxis would then look like this -
w : w[None, :, :, None, None]
x : x[:, :, None, :, None]
w : w[None, None, None, :, :]
Thus, the broadcasted product would be -
p = w[None,:,:,None,None]*x[:,:,None,:,None]*w[None,None,None,:,:]
Finally, the output would be a sum-reduction to lose (k,l), i.e. axes=(1,3) -
y = p.sum((1,3))
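Note that p has shape (m, k, i, l, j), so this materializes a much larger intermediate than the tensordot route; still, it reproduces the loop exactly (a quick sketch to verify, using the question's shapes) -
import numpy as np
x = np.random.rand(3, 5, 5)
w = np.random.rand(5, 5)
y_ref = np.zeros((3, 5, 5))
for i in range(3):
    y_ref[i] = w.T.dot(x[i]).dot(w)
# broadcasted product of shape (m, k, i, l, j), reduced over k and l
p = w[None, :, :, None, None] * x[:, :, None, :, None] * w[None, None, None, :, :]
print(np.allclose(y_ref, p.sum((1, 3))))  # True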

Related

How to substitute 2D IndexedBase variable in Sympy

Can I substitute a numpy array for a 2D IndexedBase that appears in an expression, using sympy.subs() or sympy.evalf(subs=())?
So something like:
i, j, m, n = sp.symbols('i j m n', integer=True)
x = sp.IndexedBase('x')
a = sp.IndexedBase('a')
b = sp.IndexedBase('b')
f = sp.ln(sp.Sum(sp.exp(sp.Sum(a[i, j]*x[j]+b[i], (j, 1, n))), (i, 1, m)))
which currently outputs:
[image of the rendered expression]
Note that a is a 2D IndexedBase and b is a 1D IndexedBase. I would like to substitute numpy arrays for the IndexedBases defined above, something like
A = np.random.uniform(MIN, MAX, (M, N))
B = np.random.uniform(MIN, MAX, M)
f.evalf(subs={a: A, b: B, x: (1, 5), m: M, n: N})
How would I be able to achieve something like this?
You can do this with lambdify like this:
In [36]: i, j, m, n = sp.symbols('i j m n', integer=True)
...: x = sp.IndexedBase('x')
...: a = sp.IndexedBase('a')
...: b = sp.IndexedBase('b')
...:
...: f = sp.ln(sp.Sum(sp.exp(sp.Sum(a[i, j]*x[j]+b[i], (j, 0, n-1))), (i, 0, m-1)))
In [37]: f_lam = lambdify((a, b, x, m, n), f, 'numpy')
In [38]: MIN, MAX, M, N = 0, 1, 3, 4
In [39]: A = np.random.uniform(MIN, MAX, (M, N))
...: B = np.random.uniform(MIN, MAX, M)
In [40]: X = np.random.uniform(MIN, MAX, N)
In [41]: f_lam(A, B, X, M, N)
Out[41]: 4.727558334863294
Note that I shifted the limits of the Sum to go from e.g. 0 to n-1 to match up with numpy indexing rules.
Also note that this uses the ordinary Python sum and a double-loop generator expression rather than more efficient numpy operations:
In [42]: import inspect
In [44]: print(inspect.getsource(f_lam))
def _lambdifygenerated(Dummy_34, Dummy_33, Dummy_32, m, n):
    return log((builtins.sum(exp((builtins.sum(Dummy_32[j]*Dummy_34[i, j] + Dummy_33[i] for j in range(0, n - 1+1)))) for i in range(0, m - 1+1))))
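Reading the generated source, the function computes log(sum_i exp(sum_j (a[i,j]*x[j] + b[i]))); since b[i] sits inside the inner Sum, it is added once per j, i.e. n times. A direct NumPy check of that closed form (a sketch, assuming the A, B, X, M, N defined above):
import numpy as np
# b[i] is summed over j, so it contributes n*b[i] to each inner sum
direct = np.log(np.exp(A.dot(X) + N * B).sum())
print(np.isclose(direct, f_lam(A, B, X, M, N)))  # True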
The generated code is more efficient if you set this up using matrix symbols rather than explicit expressions involving IndexedBase:
In [22]: m, n = symbols('m, n')
In [23]: a = MatrixSymbol('a', m, n)
In [24]: b = MatrixSymbol('b', m, 1)
In [25]: x = MatrixSymbol('x', n, 1)
In [26]: one = OneMatrix(1, m)
In [28]: f = log(one*(a*x + b))
In [29]: f_lam = lambdify((a, b, x, m, n), f, 'numpy')
In [30]: f_lam(A, B, X, M, N)
Out[30]: array([1.52220638])
In [33]: print(inspect.getsource(f_lam))
def _lambdifygenerated(a, b, x, m, n):
    return log((ones((1, m))).dot((a).dot(x) + b))

Vectorize the following python code?

I am trying to vectorize the following operations with two matrices in Python.
f = matrix([[ 96],
            [192],
            [288],
            [384]], dtype=int32)
g = matrix([[  0.],
            [ 70.],
            [200.],
            [ 60.]])
I need to create z without loops, such that z[0] = f[0] and each subsequent z[i] is the maximum of f[i] and z[i-1] + g[i]. This loop is called thousands of times, so it slows down the run time.
for i in range(4):
    if i != 0:
        z[i] = max(f[i], z[i-1] + g[i])
    else:
        z[0] = f[i]
Any guidance on how to vectorize this code would be really helpful.
Thanks in advance.
Here is a vectorized version. Unrolling the recurrence z[i] = max(f[i], z[i-1] + g[i]) gives z[i] = gg[i] + max(f[j] - gg[j] for j <= i), with gg = cumsum(g); so the code uses the cumulative maximum of f - gg to locate the points where f[i] overtakes the running sum z[i-1] + g[i].
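A quick check of that identity on the question's own data (a sketch; gg + np.maximum.accumulate(f - gg) already reproduces the loop) -
import numpy as np
f = np.array([96., 192., 288., 384.])
g = np.array([0., 70., 200., 60.])
# loop version
z = np.empty_like(f)
z[0] = f[0]
for i in range(1, len(f)):
    z[i] = max(f[i], z[i-1] + g[i])
# closed form: z[i] = gg[i] + max(f[j] - gg[j] for j <= i)
gg = np.cumsum(g)
print(np.allclose(z, gg + np.maximum.accumulate(f - gg)))  # True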
Timings:
N = 10
loopy 0.00594156 ms
vect 0.03193051 ms
N = 100
loopy 0.05560229 ms
vect 0.03186400 ms
N = 1000
loopy 0.57484017 ms
vect 0.04492043 ms
N = 10000
loopy 5.75115310 ms
vect 0.15519847 ms
N = 100000
loopy 57.30253551 ms
vect 1.69428380 ms
Code:
import numpy as np
import types
from timeit import timeit

def setup_data(N):
    g = np.random.random((N,))
    f = 2 + np.cumsum(np.random.random((N,)))
    return f, g

def f_loopy(f, g):
    N, = f.shape
    z = np.empty_like(f)
    for i in range(N):
        if i != 0:
            z[i] = max(f[i], z[i-1] + g[i])
        else:
            z[0] = f[i]
    return z

def f_vect(f, g):
    N, = f.shape
    gg = np.cumsum(g)
    rmx = np.maximum.accumulate(f - gg)
    sw = np.r_[0, 1 + np.flatnonzero(rmx[:-1] != rmx[1:]), N]
    return gg + np.repeat(f[sw[:-1]] - gg[sw[:-1]], np.diff(sw))

for N in [10, 100, 1000, 10000, 100000]:
    data = setup_data(N)
    ref = f_loopy(*data)
    print(f'N = {N}')
    for name, func in list(globals().items()):
        if not name.startswith('f_') or not isinstance(func, types.FunctionType):
            continue
        try:
            assert np.allclose(ref, func(*data))
            print("{:16s}{:16.8f} ms".format(name[2:], timeit(
                'f(*data)', globals={'f': func, 'data': data}, number=100) * 10))
        except Exception:
            print("{:16s} apparently failed".format(name[2:]))

Avoiding numpy loops while calculating intersections

I'd like to speed up the following calculations handling r rays and n spheres. Here is what I got so far:
# shape of mu1 and mu2 is (r, n)
# shape of rays is (r, 3)
# note that intersections has 2n columns because for every sphere one can
# get up to two intersections (secant, tangent, no intersection)
intersections = np.empty((r, 2*n, 3))
for col in range(n):
    intersections[:, col, :] = rays * mu1[:, col][:, np.newaxis]
    intersections[:, col + n, :] = rays * mu2[:, col][:, np.newaxis]
# [...]
# calculate euclidean distance from the center of gravity (0,0,0)
distances = np.empty((r, 2 * n))
for col in range(n):
    distances[:, col] = np.linalg.norm(intersections[:, col], axis=1)
    distances[:, col + n] = np.linalg.norm(intersections[:, col + n], axis=1)
I tried speeding things up by avoiding the for-loops, but couldn't figure out how to broadcast the arrays properly so that I only need a single function call. Any help is much appreciated.
Here's a vectorized way using broadcasting -
intersections = np.hstack((mu1,mu2))[...,None]*rays[:,None,:]  # (r, 2n, 1) * (r, 1, 3) -> (r, 2n, 3)
distances = np.sqrt((intersections**2).sum(2))                 # (r, 2n)
The last step could be replaced with np.einsum, like so -
distances = np.sqrt(np.einsum('ijk,ijk->ij',intersections,intersections))
Or replace almost the whole thing with np.einsum for another vectorized way, like so -
mu = np.hstack((mu1,mu2))
distances = np.sqrt(np.einsum('ij,ij,ik,ik->ij',mu,mu,rays,rays))
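Since each intersection is just a scalar multiple of its ray, the distances also factor as |mu| times the ray norms, which avoids the (r, 2n, 3) intermediate entirely (a sketch, not part of the original answer) -
mu = np.hstack((mu1, mu2))                   # (r, 2n)
ray_norms = np.linalg.norm(rays, axis=1)     # (r,)
distances = np.abs(mu) * ray_norms[:, None]  # (r, 2n)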
Runtime tests and verify outputs -
def original_app(mu1, mu2, rays):
    intersections = np.empty((r, 2*n, 3))
    for col in range(n):
        intersections[:, col, :] = rays * mu1[:, col][:, np.newaxis]
        intersections[:, col + n, :] = rays * mu2[:, col][:, np.newaxis]
    distances = np.empty((r, 2 * n))
    for col in range(n):
        distances[:, col] = np.linalg.norm(intersections[:, col], axis=1)
        distances[:, col + n] = np.linalg.norm(intersections[:, col + n], axis=1)
    return distances

def vectorized_app1(mu1, mu2, rays):
    intersections = np.hstack((mu1, mu2))[..., None] * rays[:, None, :]
    return np.sqrt((intersections**2).sum(2))

def vectorized_app2(mu1, mu2, rays):
    intersections = np.hstack((mu1, mu2))[..., None] * rays[:, None, :]
    return np.sqrt(np.einsum('ijk,ijk->ij', intersections, intersections))

def vectorized_app3(mu1, mu2, rays):
    mu = np.hstack((mu1, mu2))
    return np.sqrt(np.einsum('ij,ij,ik,ik->ij', mu, mu, rays, rays))
Timings -
In [101]: # Inputs
...: r = 1000
...: n = 1000
...: mu1 = np.random.rand(r, n)
...: mu2 = np.random.rand(r, n)
...: rays = np.random.rand(r, 3)
In [102]: np.allclose(original_app(mu1,mu2,rays),vectorized_app1(mu1,mu2,rays))
Out[102]: True
In [103]: np.allclose(original_app(mu1,mu2,rays),vectorized_app2(mu1,mu2,rays))
Out[103]: True
In [104]: np.allclose(original_app(mu1,mu2,rays),vectorized_app3(mu1,mu2,rays))
Out[104]: True
In [105]: %timeit original_app(mu1,mu2,rays)
...: %timeit vectorized_app1(mu1,mu2,rays)
...: %timeit vectorized_app2(mu1,mu2,rays)
...: %timeit vectorized_app3(mu1,mu2,rays)
...:
1 loops, best of 3: 306 ms per loop
1 loops, best of 3: 215 ms per loop
10 loops, best of 3: 140 ms per loop
10 loops, best of 3: 136 ms per loop

Optimize a numpy ndarray indexing operation

I have a numpy operation that looks like the following:
for i in range(i_max):
    for j in range(j_max):
        r[i, j, x[i, j], y[i, j]] = c[i, j]
where x, y and c have the same shape.
Is it possible to use numpy's advanced indexing to speed this operation up?
I tried using:
i = numpy.arange(i_max)
j = numpy.arange(j_max)
r[i, j, x, y] = c
However, I didn't get the result I expected.
Using linear indexing -
d0,d1,d2,d3 = r.shape
np.put(r,np.arange(i_max)[:,None]*d1*d2*d3 + np.arange(j_max)*d2*d3 + x*d3 +y,c)
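The flat index here is just the row-major offset i*d1*d2*d3 + j*d2*d3 + x*d3 + y; an equivalent, arguably clearer way to build it (a sketch, not from the original answer) is np.ravel_multi_index -
# the index arrays broadcast against each other to the full (i_max, j_max) grid
flat = np.ravel_multi_index(
    (np.arange(i_max)[:, None], np.arange(j_max), x, y), r.shape)
np.put(r, flat, c)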
Benchmarking and verification
Define functions -
def linear_indx(r, x, y, c, i_max, j_max):
    d0, d1, d2, d3 = r.shape
    np.put(r, np.arange(i_max)[:, None]*d1*d2*d3 + np.arange(j_max)*d2*d3 + x*d3 + y, c)
    return r

def org_app(r, x, y, c, i_max, j_max):
    for i in range(i_max):
        for j in range(j_max):
            r[i, j, x[i, j], y[i, j]] = c[i, j]
    return r
Setup input arrays and benchmark -
In [134]: # Setup input arrays
...: i_max = 40
...: j_max = 50
...: D0 = 60
...: D1 = 70
...: N = 80
...:
...: r = np.zeros((D0,D1,N,N))
...: c = np.random.rand(i_max,j_max)
...:
...: x = np.random.randint(0,N,(i_max,j_max))
...: y = np.random.randint(0,N,(i_max,j_max))
...:
In [135]: # Make copies for testing, as both functions make in-situ changes
...: r1 = r.copy()
...: r2 = r.copy()
...:
In [136]: # Verify results by comparing with original loopy approach
...: np.allclose(linear_indx(r1,x,y,c,i_max,j_max),org_app(r2,x,y,c,i_max,j_max))
Out[136]: True
In [137]: # Make copies for testing, as both functions make in-situ changes
...: r1 = r.copy()
...: r2 = r.copy()
...:
In [138]: %timeit linear_indx(r1,x,y,c,i_max,j_max)
10000 loops, best of 3: 115 µs per loop
In [139]: %timeit org_app(r2,x,y,c,i_max,j_max)
100 loops, best of 3: 2.25 ms per loop
The indexing arrays need to be broadcastable for this to work. The only change needed is to add an axis to the first index i to match the shape with the rest. The quick way to accomplish this is by indexing with None (which is equivalent to numpy.newaxis):
i = numpy.arange(i_max)
j = numpy.arange(j_max)
r[i[:,None], j, x, y] = c
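A minimal self-contained check of that fix (hypothetical small shapes):
import numpy as np
i_max, j_max, N = 2, 3, 4
r1 = np.zeros((i_max, j_max, N, N))
r2 = np.zeros((i_max, j_max, N, N))
c = np.random.rand(i_max, j_max)
x = np.random.randint(0, N, (i_max, j_max))
y = np.random.randint(0, N, (i_max, j_max))
for i in range(i_max):          # loopy reference
    for j in range(j_max):
        r1[i, j, x[i, j], y[i, j]] = c[i, j]
i, j = np.arange(i_max), np.arange(j_max)
r2[i[:, None], j, x, y] = c     # i[:, None] and j broadcast to (i_max, j_max)
print(np.allclose(r1, r2))  # True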

How to vectorize this loop difference in numpy?

I feel like there should be a quick way of speeding up this code. I think the answer is here, but I cannot seem to get my problem into that format. The underlying problem that I am attempting to solve is to find the point-wise differences in terms of parallel and perpendicular components and create a 2D histogram of these differences.
out = np.zeros((len(rpbins)-1, len(pibins)-1))
tmp = np.zeros((len(x), 2))
for i in xrange(len(x)):
    tmp[:, 0] = x - x[i]
    tmp[:, 1] = y - y[i]
    para = np.sum(tmp**2, axis=-1)**(1./2)
    perp = np.abs(z - z[i])
    H, _, _ = np.histogram2d(para, perp, bins=[rpbins, pibins])
    out += H
Vectorizing things like this is tricky, because getting rid of a loop over n elements requires constructing an (n, n) array, so for large inputs you are likely to get worse performance than with a Python loop. But it can be done:
mask = np.triu_indices(x.shape[0], 1)
para = np.sqrt((x[:, None] - x)**2 + (y[:, None] - y)**2)
perp = np.abs(z[:, None] - z)
hist, _, _ = np.histogram2d(para[mask], perp[mask], bins=[rpbins, pibins])
The mask is to avoid counting each distance twice. I have also set the diagonal offset to 1 to avoid including the 0 distances of each point to itself in the histogram. But if you don't index para and perp with it, you get the exact same result as your code.
With this sample data:
items = 100
rpbins, pibins = np.linspace(0, 1, 3), np.linspace(0, 1, 3)
x = np.random.rand(items)
y = np.random.rand(items)
z = np.random.rand(items)
I get this for my hist and your out:
>>> hist
array([[ 1795.,   651.],
       [ 1632.,   740.]])
>>> out
array([[ 3690.,  1302.],
       [ 3264.,  1480.]])
and out[i, j] = 2 * hist[i, j] except for i = j = 0, where out[0, 0] = 2 * hist[0, 0] + items because of the 0 distance of each item to itself.
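So out can be recovered from hist directly (a quick sketch, assuming the sample data above):
out_rec = 2 * hist
out_rec[0, 0] += items  # the items zero self-distances all land in the first bin
print(np.allclose(out_rec, out))  # True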
EDIT Tried the following after tcaswell's comment:
items = 1000
rpbins, pibins = np.linspace(0, 1, 3), np.linspace(0, 1, 3)
x, y, z = np.random.rand(3, items)

def hist1(x, y, z, rpbins, pibins):
    mask = np.triu_indices(x.shape[0], 1)
    para = np.sqrt((x[:, None] - x)**2 + (y[:, None] - y)**2)
    perp = np.abs(z[:, None] - z)
    hist, _, _ = np.histogram2d(para[mask], perp[mask], bins=[rpbins, pibins])
    return hist

def hist2(x, y, z, rpbins, pibins):
    mask = np.triu_indices(x.shape[0], 1)
    para = np.sqrt((x[:, None] - x)[mask]**2 + (y[:, None] - y)[mask]**2)
    perp = np.abs((z[:, None] - z)[mask])
    hist, _, _ = np.histogram2d(para, perp, bins=[rpbins, pibins])
    return hist

def hist3(x, y, z, rpbins, pibins):
    mask = np.triu_indices(x.shape[0], 1)
    para = np.sqrt(((x[:, None] - x)**2 + (y[:, None] - y)**2)[mask])
    perp = np.abs((z[:, None] - z)[mask])
    hist, _, _ = np.histogram2d(para, perp, bins=[rpbins, pibins])
    return hist
In [10]: %timeit -n1 -r10 hist1(x, y, z, rpbins, pibins)
1 loops, best of 10: 289 ms per loop
In [11]: %timeit -n1 -r10 hist2(x, y, z, rpbins, pibins)
1 loops, best of 10: 294 ms per loop
In [12]: %timeit -n1 -r10 hist3(x, y, z, rpbins, pibins)
1 loops, best of 10: 278 ms per loop
It seems that most of the time is spent instantiating new arrays, not doing actual computations, so while there is some efficiency to scrape off, there really isn't much.
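One small way to trim temporaries is to fuse the square/sum/sqrt into a single ufunc call with np.hypot (a sketch; per the timings above, any gain is likely modest):
def hist4(x, y, z, rpbins, pibins):
    mask = np.triu_indices(x.shape[0], 1)
    # hypot(dx, dy) == sqrt(dx**2 + dy**2) without the intermediate squared arrays
    para = np.hypot(x[:, None] - x, y[:, None] - y)
    perp = np.abs(z[:, None] - z)
    hist, _, _ = np.histogram2d(para[mask], perp[mask], bins=[rpbins, pibins])
    return hist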
