Numpy array multiplication of LDL^T factorization of symmetric matrix - python

Suppose I have an "LDL^T" decomposition of a symmetric, positive-semidefinite matrix A (numpy array), and I would like to multiply all factors together to obtain A.
What is the most efficient way to achieve this?
Currently, I am doing (D is available as a vector):
np.dot(np.dot(L, np.diag(D)), L.T),
which is quite obviously a bad solution.

Approach #1
We could use elementwise multiplication followed by matrix multiplication. This replaces np.dot(L, np.diag(D)) with a direct element-wise multiplication (broadcasting D across the columns of L), avoiding the construction of the dense diagonal matrix and hopefully gaining some speedup. With it, the implementation becomes -
(L*D).dot(L.T)
Approach #2
Another approach is to use np.einsum to do all of that in one go, like so -
np.einsum('ij,j,kj->ik',L,D,L)
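As a quick sanity check, a minimal sketch with a random unit lower-triangular L and a random non-negative D confirms that all three expressions reconstruct the same A -
import numpy as np

# Build a small LDL^T factorization by hand: unit lower-triangular L, non-negative D
n = 5
L = np.tril(np.random.rand(n, n), k=-1) + np.eye(n)
D = np.random.rand(n)

A_ref = np.dot(np.dot(L, np.diag(D)), L.T)    # original formulation
A_mul = (L*D).dot(L.T)                        # Approach #1
A_ein = np.einsum('ij,j,kj->ik', L, D, L)     # Approach #2

assert np.allclose(A_ref, A_mul) and np.allclose(A_ref, A_ein)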
Runtime test
In [303]: L = np.random.randint(0,9,(1000,1000))
In [304]: D = np.random.randint(0,9,(1000))
In [305]: %timeit np.dot(np.dot(L, np.diag(D)), L.T)
1 loops, best of 3: 3.87 s per loop
In [306]: %timeit (L*D).dot(L.T)
1 loops, best of 3: 1.39 s per loop
In [307]: %timeit np.einsum('ij,j,kj->ik',L,D,L)
1 loops, best of 3: 1.71 s per loop

Related

Vectorized sum-reduction with outer product - NumPy

I'm relatively new to NumPy and often read that you should avoid writing loops. In many cases I understand how to deal with that, but at the moment I have the following code:
p = np.arange(15).reshape(5,3)
w = np.random.rand(5)
A = np.sum(w[i] * np.outer(p[i], p[i]) for i in range(len(p)))
Does anybody know if there is a way to avoid the inner for loop?
Thanks in advance!
Approach #1 : With np.einsum -
np.einsum('ij,ik,i->jk',p,p,w)
Approach #2 : With broadcasting + np.tensordot -
np.tensordot(p[...,None]*p[:,None], w, axes=((0),(0)))
Approach #3 : With np.einsum + np.dot -
np.einsum('ij,i->ji',p,w).dot(p)
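As a quick sanity check, a small sketch (using the loop-based sum from the question as the reference) confirms all three approaches agree -
import numpy as np

p = np.arange(15).reshape(5, 3).astype(float)
w = np.random.rand(5)

# Reference: explicit sum of the weighted outer products, as in the question
A_loop = sum(w[i] * np.outer(p[i], p[i]) for i in range(len(p)))

A1 = np.einsum('ij,ik,i->jk', p, p, w)                            # Approach #1
A2 = np.tensordot(p[..., None]*p[:, None], w, axes=((0), (0)))    # Approach #2
A3 = np.einsum('ij,i->ji', p, w).dot(p)                           # Approach #3

assert np.allclose(A_loop, A1) and np.allclose(A_loop, A2) and np.allclose(A_loop, A3)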
Runtime test
Set #1 :
In [653]: p = np.random.rand(50,30)
In [654]: w = np.random.rand(50)
In [655]: %timeit np.einsum('ij,ik,i->jk',p,p,w)
10000 loops, best of 3: 101 µs per loop
In [656]: %timeit np.tensordot(p[...,None]*p[:,None], w, axes=((0),(0)))
10000 loops, best of 3: 124 µs per loop
In [657]: %timeit np.einsum('ij,i->ji',p,w).dot(p)
100000 loops, best of 3: 9.07 µs per loop
Set #2 :
In [658]: p = np.random.rand(500,300)
In [659]: w = np.random.rand(500)
In [660]: %timeit np.einsum('ij,ik,i->jk',p,p,w)
10 loops, best of 3: 139 ms per loop
In [661]: %timeit np.einsum('ij,i->ji',p,w).dot(p)
1000 loops, best of 3: 1.01 ms per loop
The third approach just blew everything else away!
Why is Approach #3 10x-130x faster than Approach #1?
np.einsum is implemented in C. In the first approach, the three indices i, j, k in its subscript notation translate into three nested loops (in C, of course), with all of the work done by einsum's generic loop machinery. That's a lot of overhead.
With the third approach, einsum only deals with two indices, i and j, hence two nested loops (in C again), and the remaining contraction is handed to a BLAS-based matrix multiplication via np.dot. These two factors are responsible for the amazing speedup with this one.

Average of numpy array ignoring specified value

I have a number of 1-dimensional numpy ndarrays containing the path length between a given node and all other nodes in a network for which I would like to calculate the average. The matter is complicated though by the fact that if no path exists between two nodes the algorithm returns a value of 2147483647 for that given connection. If I leave this value untreated it would obviously grossly inflate my average as a typical path length would be somewhere between 1 and 3 in my network.
One option of dealing with this would be to loop through all elements of all arrays and replace 2147483647 with NaN and then use numpy.nanmean to find the average though that is probably not the most efficient method of going about it. Is there a way of calculating the average with numpy just ignoring all values of 2147483647?
I should add that, I could have up to several million arrays with several million values to average over so any performance gain in how the average is found will make a real difference.
Why not use the usual numpy boolean filtering for this?
m = my_array[my_array != 2147483647].mean()
By the way, if you really want speed, your whole algorithm as described sounds naive and could probably be improved a lot.
Oh, and I assume you are calculating the mean because you have rigorously checked that the underlying distribution is normal, so that the mean actually means something, haven't you?
np.nanmean(np.where(my_array == 2147483647, np.nan, my_array))
Timings
a = np.random.randn(100000)
a[::10] = 2147483647
%timeit np.nanmean(np.where(a == 2147483647, np.nan, a))
1000 loops, best of 3: 639 µs per loop
%timeit a[a != 2147483647].mean()
1000 loops, best of 3: 259 µs per loop
import pandas as pd
%timeit pd.Series(a).ne(2147483647).mean()
1000 loops, best of 3: 493 µs per loop
One way would be to get the sum of all elements in one go, then remove the contribution from the invalid ones, and finally divide by the number of valid elements to get the average. So, we would have an implementation like so -
def mean_ignore_num(arr, num):
    # Get count of invalid ones
    invc = np.count_nonzero(arr == num)
    # Sum all elements, remove the contribution from num, then average over the valid ones
    return (arr.sum() - invc*num)/float(arr.size - invc)
Verify results -
In [191]: arr = np.full(10,2147483647).astype(np.int32)
...: arr[1] = 5
...: arr[4] = 4
...:
In [192]: arr.max()
Out[192]: 2147483647
In [193]: arr.sum() # Sum exceeds the int32 max, but no overflow (accumulated in a wider dtype)
Out[193]: 17179869185
In [194]: arr[arr != 2147483647].mean()
Out[194]: 4.5
In [195]: mean_ignore_num(arr,2147483647)
Out[195]: 4.5
Runtime test -
In [38]: arr = np.random.randint(0,9,(10000))
In [39]: arr[arr != 7].mean()
Out[39]: 3.6704609489462414
In [40]: mean_ignore_num(arr,7)
Out[40]: 3.6704609489462414
In [41]: %timeit arr[arr != 7].mean()
10000 loops, best of 3: 102 µs per loop
In [42]: %timeit mean_ignore_num(arr,7)
10000 loops, best of 3: 36.6 µs per loop
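For completeness, a masked-array variant (a sketch, not benchmarked here) expresses the same intent directly, though it is typically not faster than boolean indexing or the sum-correction trick -
import numpy as np

a = np.array([2147483647, 5, 2147483647, 4], dtype=np.int64)

# Mask out the sentinel value, then average the remaining entries
m = np.ma.masked_equal(a, 2147483647).mean()
print(m)   # 4.5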

Add scipy sparse row matrix to another sparse matrix

I have a csr_matrix A of shape (70000, 80000) and another csr_matrix B of shape (1, 80000). How can I efficiently add B to every row of A? One idea is to somehow create a sparse matrix B' which is rows of B repeated, but numpy.repeat does not work, and using a matrix of ones to create B' is very memory inefficient.
I also tried iterating through every row of A and adding B to it, but that again is very time inefficient.
Update:
I tried something very simple which seems to be much more efficient than the ideas I mentioned above. The idea is to use scipy.sparse.vstack:
C = sparse.vstack([B for x in range(A.shape[0])])
A + C
This performs well for my task! One more realization: I initially tried an iterative approach where I called vstack multiple times; that approach is slower than calling it just once.
A + B[np.zeros(A.shape[0])] is another way to expand B to the same shape as A.
It has about the same performance and memory footprint as Warren Weckesser's solution:
import numpy as np
import scipy.sparse as sparse
N, M = 70000, 80000
A = sparse.rand(N, M, density=0.001).tocsr()
B = sparse.rand(1, M, density=0.001).tocsr()
In [185]: %timeit u = sparse.csr_matrix(np.ones((A.shape[0], 1), dtype=B.dtype)); Bp = u * B; A + Bp
1 loops, best of 3: 284 ms per loop
In [186]: %timeit A + B[np.zeros(A.shape[0])]
1 loops, best of 3: 280 ms per loop
and appears to be faster than using sparse.vstack:
In [187]: %timeit A + sparse.vstack([B for x in range(A.shape[0])])
1 loops, best of 3: 606 ms per loop
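On a tiny example (a sketch for illustration), the three constructions give the same dense result -
import numpy as np
import scipy.sparse as sparse

A = sparse.csr_matrix(np.array([[1, 0, 2], [0, 3, 0]]))
B = sparse.csr_matrix(np.array([[0, 4, 0]]))

C1 = A + sparse.vstack([B for _ in range(A.shape[0])])       # repeat B with vstack
C2 = A + B[np.zeros(A.shape[0], dtype=int)]                  # repeat B with fancy row indexing
u = sparse.csr_matrix(np.ones((A.shape[0], 1), dtype=B.dtype))
C3 = A + u * B                                               # repeat B via a column of ones

assert (C1.toarray() == C2.toarray()).all()
assert (C1.toarray() == C3.toarray()).all()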

Numpy: Difference between a[i][j] and a[i,j]

Coming from a background of Python lists and of programming languages like C++/Java, one is used to extracting elements with the a[i][j] notation. But in NumPy, one usually does a[i,j]. Both of these return the same result.
What is the fundamental difference between the two and which should be preferred?
The main difference is that a[i][j] first creates an intermediate array a[i] (a view of row i) and then indexes into that view. On the other hand, a[i,j] indexes directly into a, making it faster:
In [9]: a = np.random.rand(1000,1000)
In [10]: %timeit a[123][456]
1000000 loops, best of 3: 586 ns per loop
In [11]: %timeit a[123,456]
1000000 loops, best of 3: 234 ns per loop
For this reason, I'd prefer the latter.
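A small sketch makes the intermediate array visible -
import numpy as np

a = np.random.rand(1000, 1000)

row = a[123]                      # basic indexing: a new ndarray object viewing row 123
print(row.base is a)              # True -- it shares a's memory, but still has to be constructed
print(row[456] == a[123, 456])    # True -- both reach the same element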

Fast check for NaN in NumPy

I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic.
I tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-efficient way to do this at all?
(To those who would ask "how gigantic": I can't tell. This is input validation for library code.)
Ray's solution is good. However, on my machine it is about 2.5x faster to use numpy.sum in place of numpy.min:
In [13]: %timeit np.isnan(np.min(x))
1000 loops, best of 3: 244 us per loop
In [14]: %timeit np.isnan(np.sum(x))
10000 loops, best of 3: 97.3 us per loop
Unlike min, sum doesn't require branching, which on modern hardware tends to be pretty expensive. This is probably the reason why sum is faster.
Edit: The above test was performed with a single NaN right in the middle of the array.
It is interesting to note that min is slower in the presence of NaNs than in their absence. It also seems to get slower as NaNs get closer to the start of the array. On the other hand, sum's throughput seems constant regardless of whether there are NaNs and where they're located:
In [40]: x = np.random.rand(100000)
In [41]: %timeit np.isnan(np.min(x))
10000 loops, best of 3: 153 us per loop
In [42]: %timeit np.isnan(np.sum(x))
10000 loops, best of 3: 95.9 us per loop
In [43]: x[50000] = np.nan
In [44]: %timeit np.isnan(np.min(x))
1000 loops, best of 3: 239 us per loop
In [45]: %timeit np.isnan(np.sum(x))
10000 loops, best of 3: 95.8 us per loop
In [46]: x[0] = np.nan
In [47]: %timeit np.isnan(np.min(x))
1000 loops, best of 3: 326 us per loop
In [48]: %timeit np.isnan(np.sum(x))
10000 loops, best of 3: 95.9 us per loop
I think np.isnan(np.min(X)) should do what you want.
There are two general approaches here:
Check each array item for nan and take any.
Apply some cumulative operation that preserves nans (like sum) and check its result.
While the first approach is certainly the cleanest, the heavy optimization of some of the cumulative operations (particularly the ones that are executed in BLAS, like dot) can make those quite fast. Note that dot, like some other BLAS operations, is multithreaded under certain conditions. This explains the difference in speed between different machines.
import numpy as np
import perfplot

def min(a):
    return np.isnan(np.min(a))

def sum(a):
    return np.isnan(np.sum(a))

def dot(a):
    return np.isnan(np.dot(a, a))

def any(a):
    return np.any(np.isnan(a))

def einsum(a):
    return np.isnan(np.einsum("i->", a))

b = perfplot.bench(
    setup=np.random.rand,
    kernels=[min, sum, dot, any, einsum],
    n_range=[2 ** k for k in range(25)],
    xlabel="len(a)",
)
b.save("out.png")
b.show()
Even though there is an accepted answer, I'd like to demonstrate the following (with Python 2.7.2 and Numpy 1.6.0 on Vista):
In []: x= rand(1e5)
In []: %timeit isnan(x.min())
10000 loops, best of 3: 200 us per loop
In []: %timeit isnan(x.sum())
10000 loops, best of 3: 169 us per loop
In []: %timeit isnan(dot(x, x))
10000 loops, best of 3: 134 us per loop
In []: x[5e4]= NaN
In []: %timeit isnan(x.min())
100 loops, best of 3: 4.47 ms per loop
In []: %timeit isnan(x.sum())
100 loops, best of 3: 6.44 ms per loop
In []: %timeit isnan(dot(x, x))
10000 loops, best of 3: 138 us per loop
Thus, the truly efficient way may depend heavily on the operating system. In any case, the dot(.)-based check seems to be the most stable one.
If you're comfortable with numba, it lets you create a fast short-circuiting function (it stops as soon as a NaN is found):
import numba as nb
import math
@nb.njit
def anynan(array):
    array = array.ravel()
    for i in range(array.size):
        if math.isnan(array[i]):
            return True
    return False
If there is no NaN, the function might actually be slower than np.min; I think that's because np.min uses SIMD-vectorized operations for large arrays:
import numpy as np
array = np.random.random(2000000)
%timeit anynan(array) # 100 loops, best of 3: 2.21 ms per loop
%timeit np.isnan(array.sum()) # 100 loops, best of 3: 4.45 ms per loop
%timeit np.isnan(array.min()) # 1000 loops, best of 3: 1.64 ms per loop
But if there is a NaN in the array, especially if its position is at a low index, then it's much faster:
array = np.random.random(2000000)
array[100] = np.nan
%timeit anynan(array) # 1000000 loops, best of 3: 1.93 µs per loop
%timeit np.isnan(array.sum()) # 100 loops, best of 3: 4.57 ms per loop
%timeit np.isnan(array.min()) # 1000 loops, best of 3: 1.65 ms per loop
Similar results may be achieved with Cython or a C extension; these are a bit more complicated (or readily available as bottleneck.anynan) but ultimately do the same as my anynan function.
Use .any():
if numpy.isnan(myarray).any()
numpy.isfinite may be better than isnan for checking:
if not np.isfinite(prop).all()
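Note that the two checks are not equivalent: isfinite also flags +/-inf, as this small sketch shows -
import numpy as np

x = np.array([1.0, np.inf, np.nan])
print(np.isnan(x))       # [False False  True]  -- only NaN
print(~np.isfinite(x))   # [False  True  True]  -- NaN and +/-inf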
Related to this is the question of how to find the first occurrence of NaN. This is the fastest way I know of to handle that:
index = next((i for (i,n) in enumerate(iterable) if n!=n), None)
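A pure NumPy alternative (a sketch; unlike the generator it does build the full boolean mask) uses argmax on the isnan mask, with a guard for the no-NaN case -
import numpy as np

x = np.random.rand(10)
x[6] = np.nan

mask = np.isnan(x)
index = int(mask.argmax()) if mask.any() else None   # index of the first NaN, or None
print(index)   # 6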
Adding to @nico-schlömer's and @mseifert's answers, I measured the performance of a numba-based has_nan with early stopping, compared to some of the functions that scan the full array.
On my machine, for an array without nans, the break-even happens for ~10^4 elements.
import perfplot
import numpy as np
import numba
import math

def min(a):
    return np.isnan(np.min(a))

def dot(a):
    return np.isnan(np.dot(a, a))

def einsum(a):
    return np.isnan(np.einsum("i->", a))

@numba.njit
def has_nan(a):
    for i in range(a.size):
        if math.isnan(a[i]):
            return True
    return False

def array_with_missing_values(n, p):
    """Return an array of size n with p % of its entries set to nan.
    Ex: n=1e6, p=1 : 1e4 nans assigned at random positions."""
    a = np.random.rand(n)
    idx = np.random.randint(0, len(a), int(p * len(a) / 100))
    a[idx] = np.nan
    return a
#%%
perfplot.show(
    setup=lambda n: array_with_missing_values(n, 0),
    kernels=[min, dot, has_nan],
    n_range=[2 ** k for k in range(20)],
    logx=True,
    logy=True,
    xlabel="len(a)",
)
What happens if the array has nans? I investigated the impact of the nan coverage of the array.
For arrays of length 1,000,000, has_nan becomes the better option if there are ~10^-3 % nans (so ~10 nans) in the array.
#%%
N = 1000000  # 100000
perfplot.show(
    setup=lambda p: array_with_missing_values(N, p),
    kernels=[min, dot, has_nan],
    n_range=np.array([2 ** k for k in range(20)]) / 2**20 * 0.01,
    logy=True,
    xlabel=f"% of nan in array (N = {N})",
)
If in your application most arrays have nan and you're looking for ones without, then has_nan is the best approach.
Otherwise, dot seems to be the best option.
