Most Efficient Cumulative Summation in numba? - python

I am attempting to use the most time-efficient cumsum possible on a 3D array in Python. I have tried numpy's cumsum, but found that a manually parallelized method with numba is faster:
import numpy as np
from numba import njit, prange
from timeit import default_timer as timer
@njit(parallel=True)
def cpu_cumsum(data, output):
    # initialise the first element along the last axis for every (i, j)
    for i in prange(200):
        for j in prange(2000000):
            output[i, j, 0] = data[i, j, 0]
    # accumulate along the last axis
    for i in prange(200):
        for j in prange(2000000):
            for k in range(1, 5):
                output[i, j, k] = data[i, j, k] + output[i, j, k - 1]
    return output
data = np.float32(np.arange(2000000000).reshape(200, 2000000, 5))
output = np.empty_like(data)
func_start = timer()
output = cpu_cumsum(data, output)
timing=timer()-func_start
print("Function: manualCumSum duration (seconds):" + str(timing))
My method:
Function: manualCumSum duration (seconds):2.8496341188924994
np.cumsum:
Function: cumSum duration (seconds):6.182090314569933
While trying this with guvectorize, I found that it used too much memory for my GPU, so I have since abandoned that avenue. Is there a better way to do this, or have I hit the end of the road?
PS: Speed is needed because this is looped over many times.
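For reference, the np.cumsum baseline is not shown in the question; a minimal sketch of what it presumably looks like (assuming the sum runs along the last axis, matching the manual version above):

import numpy as np
from timeit import default_timer as timer

data = np.float32(np.arange(2000000000).reshape(200, 2000000, 5))

func_start = timer()
output = np.cumsum(data, axis=2)  # cumulative sum along the last axis
timing = timer() - func_start
print("Function: cumSum duration (seconds):" + str(timing))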

Related

Is there any simple method to parallel np.einsum?

I would like to know: is there any simple way to parallelize einsum in NumPy?
I found some discussions:
Numpy np.einsum array multiplication using multiple cores
Any chance of making this faster? (numpy.einsum)
numpy.tensordot() only handles binary contractions over a single set of axes, and Numba requires writing out the loops explicitly. Is there any simple and robust approach to parallelizing einsum (possibly including opt-einsum, tf-einsum, etc.) with arbitrary contractions?
Sample code follows (if necessary I can use a more complicated contraction as the example):
import numpy as np
import timeit
import time

na = nc = 1000
nb = 1000
n_iter = 10

A = np.random.random((na, nb))
B = np.random.random((nb, nc))

t_total = 0.
for i in range(n_iter):
    start = time.time()
    C = np.einsum('ij,jk->ik', A, B)
    end = time.time()
    t_total += end - start
print('AB->C', t_total / n_iter)
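Not part of the original question, but worth noting: for a contraction like this, np.einsum can already hand the work to multi-threaded BLAS when called with optimize=True (opt_einsum.contract behaves similarly). A minimal sketch under that assumption:

import numpy as np

A = np.random.random((1000, 1000))
B = np.random.random((1000, 1000))

# optimize=True lets einsum rewrite the contraction as tensordot/dot calls,
# which dispatch to BLAS and typically run on multiple cores
C = np.einsum('ij,jk->ik', A, B, optimize=True)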

Numerical integration for matrix values in Python

I am trying to integrate over some matrix entries in Python. I want to avoid loops, because my task includes 1 million simulations. I am looking for a specification that will efficiently solve my problem.
I get the following error: only size-1 arrays can be converted to Python scalars
from scipy import integrate
import numpy.random as npr

n = 1000
m = 30
x = npr.standard_normal([n, m])

def integrand(k):
    return k * x ** 2

integrate.quad(integrand, 0, 100)
This is a simplified example of my case. I have multiple nested functions, so I cannot simply pull x out in front of the integral.
Well, you might want to use parallel execution for this. It should be quite easy as long as you just want to execute integrate.quad 30000000 times: split your workload into little packages and give them to a thread pool. Of course the speedup is limited by the number of cores in your PC. I'm not a Python programmer, but this should be possible. You can also increase the epsabs and epsrel parameters of the quad function; depending on the implementation, this should speed up the program as well. Of course you'll get a less precise result, but that might be OK depending on your problem.
import threading
from scipy import integrate
import numpy.random as npr

n = 2
m = 3
x = npr.standard_normal([n, m])

def f(a):
    for j in range(m):
        integrand = lambda k: k * x[a, j] ** 2
        i = integrate.quad(integrand, 0, 100)
        print(i)  # write it to a result array instead

for i in range(n):
    threading.Thread(target=f, args=(i,)).start()
    # better: split the work up even more and give it to a thread pool
    # to avoid the overhead of thread creation
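A sketch of the thread-pool variant suggested above, using concurrent.futures (this is not from the original answer; note also that because the integrand is a Python callback, the GIL limits how much a thread pool can help here, and ProcessPoolExecutor may be the better fit):

from concurrent.futures import ThreadPoolExecutor
from scipy import integrate
import numpy.random as npr

n = 1000
m = 30
x = npr.standard_normal([n, m])

def integrate_cell(idx):
    a, j = idx
    # each task integrates k * x[a, j]**2 over k in [0, 100]
    return integrate.quad(lambda k: k * x[a, j] ** 2, 0, 100)[0]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(integrate_cell,
                            ((a, j) for a in range(n) for j in range(m))))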
This is maybe not the ideal solution, but it should help a bit. You can use numpy.vectorize. Even the docs say: "The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop." But still, a %timeit on the simple example you provided shows a 2.3x speedup.
The implementation is:
from scipy import integrate
from numpy import vectorize
import numpy.random as npr

n = 1000
m = 30
x = npr.standard_normal([n, m])

def g(x):
    integrand = lambda k: k * x ** 2
    return integrate.quad(integrand, 0, 100)

vg = vectorize(g)
res = vg(x)
quadpy (a project of mine) does vectorized quadrature:
import numpy
import numpy.random as npr
import quadpy

x = npr.standard_normal([1000, 30])

def integrand(k):
    return numpy.multiply.outer(x ** 2, k)

scheme = quadpy.line_segment.gauss_legendre(10)
val = scheme.integrate(integrand, [0, 100])
This is much faster than all other answers.

Vectorize a for loop over pandas ewm function

So I want to vectorize a for loop to speed things up. My code is the following:
import numpy as np
import pandas as pd

def my_func(array, n):
    return pd.Series(array).ewm(span=n, min_periods=n-1).mean().to_numpy()

np.random.seed(0)
data_size = 120000
data = np.random.uniform(0, 1000, size=data_size) + 29000
loop_size = 1000
step_size = 1

X = np.zeros([data.shape[0], loop_size])
parameter_array = np.arange(1, loop_size + step_size, step_size)
for i in parameter_array:
    X[:, i-1] = my_func(data, i)
The entire for loop takes about a minute to finish, which could be a problem for future applications. I have already checked numpy.vectorize(), but it states clearly that it is for convenience only, so using it won't speed up the code by an order of magnitude.
My question is: is there a way to vectorize a for loop like this? If so, can I see a simple example of how it can be done?
Thank you in advance.
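Not part of the original question, but since each span is computed independently, one simple option (short of rewriting the EWM recursion itself) is to run the existing loop body in parallel workers, e.g. with joblib. A sketch, reusing the same my_func and data setup as above:

from joblib import Parallel, delayed
import numpy as np
import pandas as pd

def my_func(array, n):
    return pd.Series(array).ewm(span=n, min_periods=n-1).mean().to_numpy()

np.random.seed(0)
data = np.random.uniform(0, 1000, size=120000) + 29000
parameter_array = np.arange(1, 1001)

# each column is still computed by pandas, but the columns are built in parallel workers
columns = Parallel(n_jobs=-1)(delayed(my_func)(data, i) for i in parameter_array)
X = np.column_stack(columns)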

Numeric Integration Python versus Matlab

My Python code takes about 6.2 seconds to run; the Matlab code runs in under 0.05 seconds. Why is this, and what can I do to speed up the Python code? Is Cython the solution?
Matlab:
function X=Test
nIter=1000000;
Step=.001;
X0=1;
X=zeros(1,nIter+1); X(1)=X0;
tic
for i=1:nIter
    X(i+1)=X(i)+Step*(X(i)^2*cos(i*Step+X(i)));
end
toc
figure(1)
plot(0:nIter,X)
Python:
import numpy as np
import time

nIter = 1000000
Step = .001

x = np.zeros(1+nIter)
x[0] = 1

start = time.time()
for i in range(1, 1+nIter):
    x[i] = x[i-1] + Step*x[i-1]**2*np.cos(Step*(i-1)+x[i-1])
end = time.time()
print(end - start)
How to speed up your Python code
Your largest time sink is np.cos which performs several checks on the format of the input.
These are relevant and usually negligible for high-dimensional inputs, but for your one-dimensional input, this becomes the bottleneck.
The solution to this is to use math.cos, which only accepts one-dimensional numbers as input and thus is faster (though less flexible).
Another time sink is indexing x multiple times.
You can speed this up by having one state variable which you update and only writing to x once per iteration.
With all of this, you can speed up things by a factor of roughly ten:
import numpy as np
from math import cos

nIter = 1000000
Step = .001

x = np.zeros(1+nIter)
state = x[0] = 1
for i in range(nIter):
    state += Step*state**2*cos(Step*i+state)
    x[i+1] = state
Now, your main problem is that your truly innermost loop happens completely in Python, i.e., you have a lot of wrapping operations that eat up time.
You can avoid this by using uFuncs (e.g., created with SymPy’s ufuncify) and using NumPy’s accumulate:
import numpy as np
from sympy.utilities.autowrap import ufuncify
from sympy.abc import t,y
from sympy import cos
nIter = 1000000
Step = 0.001
f = ufuncify([y,t],y+Step*y**2*cos(t+y))
times = np.arange(0,nIter*Step,Step)
times[0] = 1
x = f.accumulate(times)
This runs practically within an instant.
… and why that’s not what you should worry about
If your exact code (and only that) is what you care about, then you shouldn’t worry about runtime anyway, because it’s very short either way.
If, on the other hand, you use this to gauge efficiency for problems with a considerable runtime, your example will fail because it considers only one initial condition and a very simple dynamics.
Moreover, you are using the Euler method, which is either not very efficient or not very robust, depending on your step size.
The latter (Step) is absurdly low in your case, yielding much more data than you probably need: with a step size of 1, you can see what's going on just fine.
If you want a robust integration in such cases, it’s almost always best to use a modern adaptive integrator, that can adjust its step size itself, e.g., here is a solution to your problem using a native Python integrator:
from math import cos
import numpy as np
from scipy.integrate import solve_ivp
T = 1000
dt = 0.001
x = solve_ivp(
    lambda t, state: state**2*cos(t+state),
    t_span=(0, T),
    t_eval=np.arange(0, T, dt),
    y0=[1],
    rtol=1e-5,
).y
This automatically adjusts the step size to something higher, depending on the error tolerance rtol.
It still returns the same amount of output data, but that’s via interpolation of the solution.
It runs in 0.3 s for me.
How to speed up things in a scalable manner
If you still need to speed up something like this, chances are that your derivative (f) is considerably more complex than in your example and thus it is the bottleneck.
Depending on your problem, you may be able to vectorise its calculation (using NumPy or similar).
If you can't vectorise, I wrote a module that specifically focuses on this by hard-coding your derivative under the hood.
Here is your example with a sampling step of 1:
import numpy as np
from jitcode import jitcode,y,t
from symengine import cos
T = 1000
dt = 1
ODE = jitcode([y(0)**2*cos(t+y(0))])
ODE.set_initial_value([1])
ODE.set_integrator("dop853")
x = np.hstack([ODE.integrate(t) for t in np.arange(0,T,dt)])
This runs again within an instant. While this may not be a relevant speed boost here, this is scalable to huge systems.
The difference is JIT compilation, which Matlab uses by default. Let's try your example with Numba (a Python JIT compiler).
Code
import numba as nb
import numpy as np
import time

nIter = 1000000
Step = .001

@nb.njit()
def integrate(nIter, Step):
    x = np.zeros(1+nIter)
    x[0] = 1
    for i in range(1, 1+nIter):
        x[i] = x[i-1] + Step*x[i-1]**2*np.cos(Step*(i-1)+x[i-1])
    return x

# Avoid measuring the compilation time;
# this would also be recommendable for Matlab to have a fair comparison
res = integrate(nIter, Step)

start = time.time()
for i in range(100):
    res = integrate(nIter, Step)
end = time.time()
print((end - start)/100)
This results in 0.022s runtime per call.

Why is numpy.power 60x slower than in-lining?

Maybe I'm doing something odd, but I found a surprising performance loss when using numpy; it seems consistent regardless of the power used. For instance, when x is a random 100x100 array,
x = numpy.power(x,3)
is about 60x slower than
x = x*x*x
A plot of the speed up for various array sizes reveals a sweet spot with arrays around size 10k and a consistent 5-10x speed up for other sizes.
Code to test below on your own machine (a little messy):
import numpy as np
from matplotlib import pyplot as plt
from time import time

ratios = []
sizes = []
for n in np.logspace(1, 3, 20).astype(int):
    a = np.random.randn(n, n)

    inline_times = []
    for i in range(100):
        t = time()
        b = a*a*a
        inline_times.append(time()-t)
    inline_time = np.mean(inline_times)

    pow_times = []
    for i in range(100):
        t = time()
        b = np.power(a, 3)
        pow_times.append(time()-t)
    pow_time = np.mean(pow_times)

    sizes.append(a.size)
    ratios.append(pow_time/inline_time)

plt.plot(sizes, ratios)
plt.title('Performance of inline vs numpy.power')
plt.ylabel('Nx speed-up using inline')
plt.xlabel('Array size')
plt.xscale('log')
plt.show()
Anyone have an explanation?
It's well known that multiplication of doubles, which your processor can do in a very fancy way, is very, very fast. pow is decidedly slower.
Some performance guides out there even advise people to plan for this, perhaps even in some way that might be a bit overzealous at times.
numpy special-cases squaring to make sure it's not too, too slow, but it sends cubing right off to your libc's pow, which isn't nearly as fast as a couple multiplications.
I suspect the issue is that np.power always does float exponentiation, and it doesn't know how to optimize or vectorize that on your platform (or, probably, most/all platforms), while multiplication is easy to toss into SSE, and pretty fast even if you don't.
Even if np.power were smart enough to do integer exponentiation separately, unless it unrolled small values into repeated multiplication, it still wouldn't be nearly as fast.
You can verify this pretty easily by comparing the time for int-to-int, int-to-float, float-to-int, and float-to-float powers vs. multiplication for a small array; int-to-int is about 5x as fast as the others—but still 4x slower than multiplication (although I tested with PyPy with a customized NumPy, so it's probably better for someone with the normal NumPy installed on CPython to give real results…)
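A hedged sketch of the comparison this answer suggests (not from the original answer): timing ** with different operand dtypes against plain multiplication on a small array.

import numpy as np
import timeit

a_float = np.random.randn(100, 100)
a_int = np.random.randint(1, 10, (100, 100))

cases = {
    'int ** int':     lambda: a_int ** 3,
    'int ** float':   lambda: a_int ** 3.0,
    'float ** int':   lambda: a_float ** 3,
    'float ** float': lambda: a_float ** 3.0,
    'a * a * a':      lambda: a_float * a_float * a_float,
}
for name, fn in cases.items():
    # rough relative timings are enough to see which cases hit the slow pow path
    print(name, timeit.timeit(fn, number=1000))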
The performance of numpy's power function scales very non-linearly with the exponent. Contrast this with the naive approach (repeated multiplication), which scales linearly. The same type of scaling should hold regardless of matrix size. Basically, unless the exponent is sufficiently large, you aren't going to see any tangible benefit.
import matplotlib.pyplot as plt
import numpy as np
import functools
import time

def timeit(func):
    @functools.wraps(func)
    def newfunc(*args, **kwargs):
        startTime = time.time()
        res = func(*args, **kwargs)
        elapsedTime = time.time() - startTime
        return (res, elapsedTime)
    return newfunc

@timeit
def naive_power(m, n):
    # elementwise power by repeated multiplication
    m = np.asarray(m)
    res = m.copy()
    for i in range(1, n):
        res *= m
    return res

@timeit
def fast_power(m, n):
    # elementwise power
    return np.power(m, n)

m = np.random.random((100, 100))
n = 400

ts1 = []
ts2 = []
for i in range(1, n):
    r1, t1 = naive_power(m, i)
    ts1.append(t1)
for i in range(1, n):
    r2, t2 = fast_power(m, i)
    ts2.append(t2)

plt.plot(ts1, label='naive')
plt.plot(ts2, label='numpy')
plt.xlabel('exponent')
plt.ylabel('time')
plt.legend(loc='upper left')
