How to vectorize calculation of barycentric coordinates in Python

In scipy.spatial there is the Delaunay class. The documentation includes an example of how to calculate barycentric coordinates.
Following that example, the code below calculates barycentric coordinates using a loop.
import numpy as np
from scipy.spatial import Delaunay

points = np.array([(0, 0), (0, 1), (1, 0), (1, 1)])
samples = np.array([(0.5, 0.5), (0, 0), (0.1, 0.1)])
dim = len(points[0])                # dimension of the samples
simp = Delaunay(points)             # triangulate the defined points
s = simp.find_simplex(samples)      # find the containing simplex for each sample
b0 = np.zeros((len(samples), dim))  # space for the first dim barycentric coordinates
for ii in range(len(samples)):
    b0[ii, :] = simp.transform[s[ii], :dim].dot(
        (samples[ii] - simp.transform[s[ii], dim]).transpose())
coord = np.c_[b0, 1 - b0.sum(axis=1)]
This is fine for a short list of samples, but for very large lists of samples the performance is poor. How can this be modified to take advantage of vectorized math in numpy/scipy to improve performance?

Consider the following modification (for-loop replaced with numpy methods):
import numpy as np
import scipy.spatial as ssp

def f_1(points, samples):
    """ original """
    dim = len(points[0])
    simp = ssp.Delaunay(points)
    s = simp.find_simplex(samples)
    b0 = np.zeros((len(samples), dim))
    for ii in range(len(samples)):
        b0[ii, :] = simp.transform[s[ii], :dim].dot(
            (samples[ii] - simp.transform[s[ii], dim]).transpose())
    coord = np.c_[b0, 1 - b0.sum(axis=1)]
    return coord

def f_2(points, samples):
    """ modified """
    simp = ssp.Delaunay(points)
    s = simp.find_simplex(samples)
    b0 = (simp.transform[s, :points.shape[1]].transpose([1, 0, 2]) *
          (samples - simp.transform[s, points.shape[1]])).sum(axis=2).T
    coord = np.c_[b0, 1 - b0.sum(axis=1)]
    return coord
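As an aside, the per-sample matrix-vector product in f_2 can also be written with np.einsum, which some find easier to read. A sketch that should be equivalent to the b0 computation above (not part of the original answer):
def f_2_einsum(points, samples):
    """ same as f_2, with the contraction written via einsum """
    dim = points.shape[1]
    simp = ssp.Delaunay(points)
    s = simp.find_simplex(samples)
    # b0[i, j] = sum_k T[s[i], j, k] * (samples[i] - r[s[i]])[k]
    b0 = np.einsum('ijk,ik->ij',
                   simp.transform[s, :dim],
                   samples - simp.transform[s, dim])
    return np.c_[b0, 1 - b0.sum(axis=1)]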
Test case:
import itertools

N = 100
points = np.array(list(itertools.product(range(N), repeat=2)))
samples = np.random.rand(100_000, 2) * N
Result:
%timeit f_1(points, samples)
712 ms ± 2.76 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit f_2(points, samples)
422 ms ± 809 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
With the modified version, the line simp.find_simplex(samples) accounts for about 95% of the running time, so I guess there is nothing more to gain from vectorization alone. To improve performance further you would need another implementation of the find_simplex method or another approach to the problem.

Related

Efficient way to do a large number of regressions using numpy?

I have a large collection (26,214,400 to be exact) of data sets I want to perform a linear regression on, i.e. each of the 26,214,400 data sets consists of n x values and n y values, and I want to find y = m * x + b. For any single set of points I can use sklearn or numpy.linalg.lstsq, something like:
A = np.vstack([x, np.ones(len(x))]).T
m, b = np.linalg.lstsq(A, y, rcond=None)[0]
Is there a way to set up the matrices such that I can avoid a python loop through 26,214,400 items? Or do I have to use a loop, and would I be better served using something like Numba?
I ended up going the numba route, which yielded a ~20x speed-up on my laptop; it used all my cores, so I assume more CPUs would be better still. The answer looked something like:
import numpy as np
from numpy.linalg import lstsq
import numba

@numba.jit(nogil=True, parallel=True)
def fit(XX, yy):
    """Fit a large set of points to a regression"""
    assert XX.shape == yy.shape, "Inputs mismatched"
    n_pnts, n_samples = XX.shape
    scale = np.empty(n_pnts)
    offset = np.empty(n_pnts)
    for i in numba.prange(n_pnts):
        X, y = XX[i], yy[i]
        A = np.vstack((np.ones_like(X), X)).T
        offset[i], scale[i] = lstsq(A, y)[0]
    return offset, scale
Running it:
XX, yy = np.random.randn(2, 1000, 10)
offset, scale = fit(XX, yy)
%timeit offset, scale = fit(XX, yy)
1.87 ms ± 37.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The non-jitted version has this timing:
41.7 ms ± 620 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
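Since each fit here is a plain straight line, the loop can also be avoided entirely with the closed-form least-squares solution vectorized over the first axis. A sketch of that alternative (the helper name fit_closed_form is hypothetical, not from the original answer):
import numpy as np

def fit_closed_form(XX, yy):
    """Row-wise slope and intercept via the closed-form normal equations."""
    x_mean = XX.mean(axis=1, keepdims=True)
    y_mean = yy.mean(axis=1, keepdims=True)
    # slope = cov(x, y) / var(x), computed per row
    scale = ((XX - x_mean) * (yy - y_mean)).sum(axis=1) / ((XX - x_mean)**2).sum(axis=1)
    offset = y_mean.ravel() - scale * x_mean.ravel()
    return offset, scale
It is called the same way as fit(XX, yy) above; whether it beats the numba version will depend on array shapes and memory bandwidth.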

Discrete Fourier Transform, speed up calculation

Input: a Hermitian matrix \rho_{i,j} with i, j = 0, 1, ..., d-1.
Output: neg = \sum_{\mu,m} ( |W(\mu,m)| - W(\mu,m) ), with the sum over all \mu, m = 0, ..., d-1, where
W(\mu,m) = \sum_{n=0}^{d-1} \exp(-4 i \pi \mu n / d) \, \rho_{(m-n) \% d, (m+n) \% d}.
Problem: 1) For large d (d > 5000) the direct method (snippet 1) is rather slow.
2) Using np.fft.fft() is much faster, but numpy's FFT definition has a factor of 2 in the exponent rather than 4:
https://docs.scipy.org/doc/numpy/reference/routines.fft.html#module-numpy.fft
Is it possible to improve snippet 1 using snippet 2 to speed up the calculation? Maybe one can use a 2D FFT?
Snippet 1:
W = np.zeros([d, d])
neg = 0
for mu in range(d):
    for m in range(d):
        x = 0
        for n in range(d):
            x += np.exp(-4*np.pi*1.0j*mu*n/d) * rho[(m-n) % d, (m+n) % d]
        W[mu, m] = x.real
        neg += np.abs(W[mu, m]) - W[mu, m]
Snippet 2:
import numpy as np
import time

# create the matrix rho
psi = np.random.rand(500)
psi = psi/np.linalg.norm(psi)  # normalize it
d = len(psi)
rho = np.outer(psi, np.conj(psi))

start_time = time.time()
m = 1  # for example, use a particular m
a = np.array([rho[(m-nn) % d, (m+nn) % d] for nn in range(d)])
ft = np.fft.fft(a)
end_time = time.time()
print(end_time - start_time)
Remove the nested loops by exploiting numpy's array arithmetic.
import numpy as np

def my_sins(x):
    return np.sin(2*x) + np.cos(4*x) + np.sin(3*x)

def dft(x, n=None):
    if n is None:
        n = len(x)
    k = len(x)
    cn = np.sum(x*np.exp(-2*np.pi*1j*np.outer(np.arange(k), np.arange(n))/n), 1)
    return cn
For some sample data
x = np.linspace(0,2*np.pi,1000)
y = my_sins(x)
%timeit dft(y)
On my system this yields:
145 ms ± 953 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
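Coming back to the W(\mu, m) in the question: since \exp(-4 i \pi \mu n / d) = \exp(-2 i \pi (2\mu) n / d), W(\mu, m) is the FFT over n of a_m[n] = \rho_{(m-n) \% d, (m+n) \% d}, sampled at frequency (2\mu) mod d. A sketch of a fully vectorized version built on that observation (not part of the original answer; it materializes a d x d intermediate, so memory grows as d**2):
import numpy as np

def neg_via_fft(rho):
    """W[mu, m] and neg from a Hermitian matrix rho, using one batched FFT."""
    d = rho.shape[0]
    m = np.arange(d)[:, None]
    n = np.arange(d)[None, :]
    A = rho[(m - n) % d, (m + n) % d]        # A[m, n] = rho[(m-n)%d, (m+n)%d]
    F = np.fft.fft(A, axis=1)                # F[m, k] = sum_n A[m, n] * exp(-2i*pi*k*n/d)
    W = F[:, (2 * np.arange(d)) % d].T.real  # pick frequency k = 2*mu (mod d) for each mu
    neg = np.sum(np.abs(W) - W)
    return W, neg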

More efficient weighted Gini coefficient in Python

Per https://stackoverflow.com/a/48981834/1840471, this is an implementation of the weighted Gini coefficient in Python:
import numpy as np

def gini(x, weights=None):
    if weights is None:
        weights = np.ones_like(x)
    # Calculate mean absolute deviation in two steps, for weights.
    count = np.multiply.outer(weights, weights)
    mad = np.abs(np.subtract.outer(x, x) * count).sum() / count.sum()
    rmad = mad / np.average(x, weights=weights)
    # Gini equals half the relative mean absolute deviation.
    return 0.5 * rmad
This is clean and works well for medium-sized arrays, but as warned in its initial suggestion (https://stackoverflow.com/a/39513799/1840471) it's O(n²). On my computer that means it breaks after ~20k rows:
n = 20000 # Works, 30000 fails.
gini(np.random.rand(n), np.random.rand(n))
Can this be adjusted to work for larger datasets? Mine is ~150k rows.
Here is a version which is much faster than the one you provided above; it also uses a simplified formula for the unweighted case to get even faster results there.
import numpy as np

def gini(x, w=None):
    # The rest of the code requires numpy arrays.
    x = np.asarray(x)
    if w is not None:
        w = np.asarray(w)
        sorted_indices = np.argsort(x)
        sorted_x = x[sorted_indices]
        sorted_w = w[sorted_indices]
        # Force float dtype to avoid overflows
        cumw = np.cumsum(sorted_w, dtype=float)
        cumxw = np.cumsum(sorted_x * sorted_w, dtype=float)
        return (np.sum(cumxw[1:] * cumw[:-1] - cumxw[:-1] * cumw[1:]) /
                (cumxw[-1] * cumw[-1]))
    else:
        sorted_x = np.sort(x)
        n = len(x)
        cumx = np.cumsum(sorted_x, dtype=float)
        # The above formula, with all weights equal to 1, simplifies to:
        return (n + 1 - 2 * np.sum(cumx) / cumx[-1]) / n
Here is some test code to check we get (mostly) the same results:
>>> x = np.random.rand(1000000)
>>> w = np.random.rand(1000000)
>>> gini_max_ghenis(x, w)
0.33376310938610521
>>> gini(x, w)
0.33376310938610382
But the speed is very different:
%timeit gini(x, w)
203 ms ± 3.68 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit gini_max_ghenis(x, w)
55.6 s ± 3.35 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
If you remove the pandas ops from the function, it is already much faster:
%timeit gini_max_ghenis_no_pandas_ops(x, w)
1.62 s ± 75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
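(The exact gini_max_ghenis_no_pandas_ops used for that timing is not shown in the post; a pure-numpy rewrite of the answer below would look roughly like this sketch:)
import numpy as np

def gini_max_ghenis_no_pandas_ops(x, w=None):
    # Pure-numpy version of the pandas-based answer below: sort once, then cumulative sums.
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    order = np.argsort(x)
    sx = x[order] * w[order]
    sw = w[order]
    pxi = np.cumsum(sx) / sx.sum()
    pci = np.cumsum(sw) / sw.sum()
    # Same telescoping sum as the explicit loop in that answer, vectorized.
    return np.sum(pxi[1:] * pci[:-1] - pci[1:] * pxi[:-1])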
If you want to get the last drop of performance you could use numba or cython but that would only gain a few percent because most of the time is spent in sorting.
%timeit ind = np.argsort(x); sx = x[ind]; sw = w[ind]
180 ms ± 4.82 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
edit: gini_max_ghenis is the code used in Max Ghenis' answer
Adapting the StatsGini R function from here:
import numpy as np
import pandas as pd

def gini(x, w=None):
    # Array indexing requires reset indexes.
    x = pd.Series(x).reset_index(drop=True)
    if w is None:
        w = np.ones_like(x)
    w = pd.Series(w).reset_index(drop=True)
    n = x.size
    wxsum = sum(w * x)
    wsum = sum(w)
    sxw = np.argsort(x)
    sx = x[sxw] * w[sxw]
    sw = w[sxw]
    pxi = np.cumsum(sx) / wxsum
    pci = np.cumsum(sw) / wsum
    g = 0.0
    for i in np.arange(1, n):
        g = g + pxi.iloc[i] * pci.iloc[i - 1] - pci.iloc[i] * pxi.iloc[i - 1]
    return g
This works for large vectors, at least up to 10M rows:
n = int(1e7)  # rand() needs an integer count
gini(np.random.rand(n), np.random.rand(n)) # Takes ~15s.
It also produces the same result as the function provided in the question, for example giving 0.2553 for this example:
gini(np.array([3, 1, 6, 2, 1]), np.array([4, 2, 2, 10, 1]))

Perform mulitple comparisons (interval) in a Numpy array in a single pass

I have a situation similar to the "Count values in a certain range" question, but instead of a column vector I have a matrix intervals with two columns [lower, upper] and another column vector true_values.
I want to check, element-wise, whether the values in the true_values vector fall within the ranges defined by [lower, upper].
The answer provided in the linked question would do 4 passes:
((true_values >= intervals[:, 0]) & (true_values <= intervals[:, 1])).sum()
One pass for each greater/less than check, one for the and clause, and one for the sum.
Given that these are potentially huge matrices, I'm wondering if it's possible to reduce the number of passes, ideally to one pass for the interval checks and one for the sum (which I think is unavoidable). I was thinking of something like broadcasting a function over the rows of intervals.
Here's a minimal example:
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
n_samples = 2000
n_features = 10
rng = np.random.RandomState(0)
X = rng.normal(size=(n_samples, n_features))
w = rng.normal(size=n_features)
# simple linear function without noise
y = np.dot(X, w)
gbrt = GradientBoostingRegressor(loss='quantile', alpha=0.95)
gbrt.fit(X, y)
# Get upper interval
upper_interval = gbrt.predict(X)
# Get lower interval
gbrt.set_params(alpha=0.05)
gbrt.fit(X, y)
lower_interval = gbrt.predict(X)
intervals = np.concatenate((lower_interval[:, np.newaxis], upper_interval[:, np.newaxis]), axis=1)
# This is 4 passes:
perc_correct_intervals = ((y >= intervals[:, 0]) & (y <= intervals[:, 1])).sum() / y.shape[0]
There are some savings with np.count_nonzero vs. .sum(), and more if you don't actually need to build the intervals matrix for other uses:
%%timeit
intervals = np.concatenate((lower_interval[:, np.newaxis], upper_interval[:, np.newaxis]), axis=1);
perc_correct_intervals = ((y >= intervals[:, 0]) & (y <= intervals[:, 1])).sum() / y.shape[0]
15.7 µs ± 78.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%%timeit
np.count_nonzero(np.less(lower_interval, y)*np.less(y, upper_interval))/y.size
3.93 µs ± 28 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
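If a genuinely single-pass check matters, a compiled loop (for example with numba, as in the regression answer above) can fuse both comparisons and the count; a sketch, not part of the original answer:
import numba
import numpy as np

@numba.njit
def frac_in_interval(lower, upper, y):
    # One fused pass: test both bounds and count in the same loop.
    count = 0
    for i in range(y.shape[0]):
        if (y[i] >= lower[i]) and (y[i] <= upper[i]):
            count += 1
    return count / y.shape[0]
It would be used as frac_in_interval(lower_interval, upper_interval, y) on the arrays from the example.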

Best way to implement numpy.sin(x) / x where x might contain 0

What I am doing now is:
import numpy as np

eps = np.finfo(float).eps

def sindiv(x):
    x = np.abs(x)
    return np.maximum(eps, np.sin(x)) / np.maximum(eps, x)
But there are quite a lot of additional array operations. Is there a better way?
You could use numpy.sinc, which computes sin(pi x)/(pi x):
In [20]: x = 2.4
In [21]: np.sin(x)/x
Out[21]: 0.28144299189631289
In [22]: x_over_pi = x / np.pi
In [23]: np.sinc(x_over_pi)
Out[23]: 0.28144299189631289
In [24]: np.sinc(0)
Out[24]: 1.0
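Applied to a whole array, the same trick is just a matter of dividing the argument by pi first; a minimal sketch:
def sindiv(x):
    # np.sinc(t) = sin(pi*t)/(pi*t), so np.sinc(x/pi) = sin(x)/x, with the value 1 at x = 0.
    return np.sinc(np.asarray(x) / np.pi)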
In numpy array notation (so you get back a np array):
def sindiv(x):
    return np.where(np.abs(x) < 0.01, 1.0 - x*x/6.0, np.sin(x)/x)
Here I've made "epsilon" fairly large for testing and used the first two terms of the Taylor series for the approximation. In practice, I'd change 0.01 to some small multiple of your eps (machine epsilon).
xx = np.arange(-0.1, 0.1, 0.001)
yy = sindiv(xx)
type(yy)
outputs numpy.ndarray and the values are continuous (and differentiable, if that's important) near the origin.
If you don't want the double evaluation (i.e. both branches are evaluated in the above), then I think you have to go with a loop as I don't believe there is any sort of "lazy where" option.
def sindiv(x):
    sox = np.zeros(x.size)
    for i in range(x.size):
        xv = x[i]
        if np.abs(xv) < 0.001:  # For testing; use a small multiple of machine epsilon
            sox[i] = 1.0 - xv * xv / 6.0
        else:
            sox[i] = np.sin(xv) / xv
    return sox
To make this really pythonic though it would be best to check the type of x and just do the non-array version if it is not an array.
As others have said, numpy.sinc() is the easiest.
I want to include a copy of its current implementation in NumPy 1.21.2 (link) to show there are no special tricks:
y = pi * where(x == 0, 1.0e-20, x)
return sin(y)/y
It's basically just sin(x)/x. Note that in creating y: multiplication by pi, where(), and x == 0 will create at least 2 intermediate arrays plus the final array for y. And then sin(y)/y creates two more arrays. In total at least 5 arrays are created by numpy.sinc(); and by my count your sindiv() also creates at least 5 arrays, so it's not actually that wasteful.
Here is another implementation:
TINY = np.finfo(float).tiny  # ≈ 2e-308 (smallest 'normal' float)

def mysinc(x):
    y = np.abs(np.pi*x) + TINY
    return np.sin(y)/y
I'm pretty sure this returns identical values to numpy.sinc(). The reason being sin(x) == x for relatively 'large' values of x:
x = np.ldexp(1, -26, dtype=np.double) # x = 2**-26 ≈ 1.5e-8
print(np.sin(x) == x) # True
x = np.ldexp(1, -32, dtype=np.longdouble) # x = 2**-32 ≈ 2.3e-10
print(np.sin(x) == x) # True
So for small enough x (ignore pi factors), mysinc(x) = (x+TINY)/(x+TINY) = x/x = np.sinc(x). The exact threshold this happens does not matter too much so long as TINY < np.spacing(x) when it occurs so that x + TINY = x in this regime.
(The cutoff is around the square-root of the machine epsilon as can be understood from the Taylor series sin(x) = x - x**3/6 + ... = x(1-x**2/6) + .... So TINY is always small enough to not matter.)
Timings
import numpy as np

eps = np.finfo(float).eps
tiny = np.finfo(float).tiny

def npsinc(x):
    y = np.pi * np.where(x == 0, 1.0e-20, x)
    return np.sin(y)/y

def sindiv(x):
    x = np.pi * np.abs(x)
    return np.maximum(eps, np.sin(x)) / np.maximum(eps, x)

def mysinc(x):
    y = np.abs(np.pi*x) + tiny
    return np.sin(y)/y

def mysinc2(x):
    y = np.abs(np.pi*x)
    y += tiny  # in-place addition
    return np.sin(y)/y

# Test data
x = np.random.rand(100)
x[np.random.randint(100, size=10)] = 0
%timeit npsinc(x)
# 10.9 µs ± 18.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit sindiv(x)
# 9.4 µs ± 12.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit mysinc(x)
# 7.38 µs ± 15.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit mysinc2(x)
# 8.64 µs ± 20.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Curiously using mysinc2() with in-place addition seems to be slower, and using in-place numpy.abs() and in-place numpy.sin() is even slower. Not entirely sure why, but see this related question.
Regardless, if you really need performance, you can try using Cython to generate C code and do things properly instead of playing tricks with NumPy:
%%cython
from libc.math cimport M_PI, sin
cimport cython
cimport numpy as np
import numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef _cysinc(double[:] x, double[:] out):
    cdef size_t i
    for i in range(x.shape[0]):
        if x[i] == 0:
            out[i] = 1
        else:
            out[i] = sin(M_PI*x[i])/(M_PI*x[i])

def cysinc(np.ndarray x):
    out = np.empty_like(x)
    _cysinc(x.ravel(), out.ravel())
    return out
%timeit cysinc(x)
# 4.38 µs ± 11.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
As always, don't prematurely optimize, just use numpy.sinc() to begin with.
Side note
There's a question Is boost::math::sinc_pi unnecessarily complicated? that asks about the benefits of using a Taylor expansion about x=0. In summary, almost none, but maybe they are doing it for other reasons.
To emphasise, there is nothing unstable about floating point division, or dividing a small number by a small number since you're just dividing the significands and subtracting the exponents.
If you calculate sinc(x) as sin(x)/x, instead of a direct Taylor series or other method that sums to convergence beyond the machine epsilon np.spacing(sinc(x)), you will be off by at most np.spacing(sinc(x)) coming from the round-off error in division /, just as you'd get with multiplication *. (Assuming no subnormal business, which even here does not matter in the treatment of sin(x)/x.)
What about allowing division by zero and replacing the NaNs afterwards?
import numpy as np
def sindiv(x):
    a = np.sin(x)/x
    # nan_to_num() maps the 0/0 NaN to 0 by default; pass nan=1.0 to return the limit value at x == 0
    a = np.nan_to_num(a, nan=1.0)
    return a
If you don't want the warnings, suppress them via np.seterr.
Of course, the temporary a could be eliminated:
def sindiv(x):
    return np.nan_to_num(np.sin(x)/x, nan=1.0)
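For a scoped alternative to seterr, np.errstate silences the 0/0 warning only inside the function; a minimal sketch combining it with the one-liner above:
import numpy as np

def sindiv(x):
    # Ignore the invalid-value warning from 0/0, then map the resulting NaN to the limit value 1.
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.nan_to_num(np.sin(x) / x, nan=1.0)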
