Numerical integration for matrix values in Python

I am trying to integrate over some matrix entries in Python. I want to avoid loops, because my task involves about one million simulations. I am looking for an approach that will solve my problem efficiently.
With the code below I get the following error: only size-1 arrays can be converted to Python scalars
from scipy import integrate
import numpy.random as npr
n = 1000
m = 30
x = npr.standard_normal([n, m])
def integrand(k):
    return k * x ** 2

integrate.quad(integrand, 0, 100)
This is a simplified example of my case. I have multiple nested functions, so I cannot simply pull x out in front of the integral.
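For context, quad expects the integrand to return a single scalar, which is why an integrand returning an (n, m) array triggers the size-1 conversion error. Newer SciPy releases (1.4 and later) also provide integrate.quad_vec, which accepts array-valued integrands directly; a minimal sketch of how that could look for this example, with all tolerances left at their defaults:
from scipy import integrate
import numpy.random as npr

n = 1000
m = 30
x = npr.standard_normal([n, m])

def integrand(k):
    # k is a scalar here; the return value is an (n, m) array
    return k * x ** 2

# quad_vec integrates every entry of the returned array over k in [0, 100]
val, err = integrate.quad_vec(integrand, 0, 100)
print(val.shape)  # (1000, 30)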

You might want to use parallel execution for this. It should be quite easy as long as you just want to execute integrate.quad 30,000,000 times. Just split your workload into small packages and hand them to a thread pool. Of course the speedup is limited to the number of cores in your PC. I'm not a Python programmer, but this should be possible. You can also increase the epsabs and epsrel parameters of the quad function; depending on the implementation this should speed up the program as well. Of course you'll get a less precise result, but that might be acceptable for your problem.
import threading
from scipy import integrate
import numpy.random as npr

n = 2
m = 3
x = npr.standard_normal([n, m])

def f(a):
    for j in range(m):
        integrand = lambda k: k * x[a, j]**2
        i = integrate.quad(integrand, 0, 100)
        print(i)  # write it to a result array instead

for i in range(n):
    # pass the function and its argument separately; target=f(i) would run
    # f in the main thread before the Thread is even created
    threading.Thread(target=f, args=(i,)).start()
# better: split the work into larger chunks and hand them to a thread pool
# to avoid the overhead of thread creation
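Following the comment above, a pool keeps the thread-creation overhead down. A minimal sketch using concurrent.futures; the per-row chunking and the worker count are illustrative choices, and because the Python-level integrand holds the GIL, a ProcessPoolExecutor may parallelize CPU-bound quad calls better than threads:
from concurrent.futures import ThreadPoolExecutor
from scipy import integrate
import numpy.random as npr

n = 1000
m = 30
x = npr.standard_normal([n, m])

def integrate_row(a):
    # integrate every entry of row a and return the row of results
    return [integrate.quad(lambda k: k * x[a, j] ** 2, 0, 100)[0]
            for j in range(m)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(integrate_row, range(n)))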

This is maybe not the ideal solution, but it should help a bit. You can use numpy.vectorize. Even the docs say: "The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop." But still, a %timeit on the simple example you provided shows a 2.3x speedup.
The implementation is:
from scipy import integrate
from numpy import vectorize
import numpy.random as npr
n = 1000
m = 30
x = npr.standard_normal([n,m])
def g(x):
    integrand = lambda k: k * x**2
    return integrate.quad(integrand, 0, 100)
vg = vectorize(g)
res = vg(x)
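Since g returns quad's (value, error) pair, np.vectorize returns a tuple of two arrays here; a small usage note, assuming the code above:
values, errors = res   # integral values and absolute-error estimates
print(values.shape)    # (1000, 30)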

quadpy (a project of mine) does vectorized quadrature:
import numpy
import numpy.random as npr
import quadpy
x = npr.standard_normal([1000, 30])
def integrand(k):
    return numpy.multiply.outer(x ** 2, k)
scheme = quadpy.line_segment.gauss_legendre(10)
val = scheme.integrate(integrand, [0, 100])
This is much faster than the approaches in the other answers.
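As a quick sanity check on the posted example: the integral of k over [0, 100] is 100**2 / 2 = 5000, so the result should match 5000 * x**2 entrywise (assuming the quadpy API used above):
import numpy
# Gauss-Legendre with 10 points is exact for a linear integrand in k
assert numpy.allclose(val, 5000 * x ** 2)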

Related

Is there any simple method to parallelize np.einsum?

I would like to know: is there any simple method to parallelize einsum in NumPy?
I found some discussions
Numpy np.einsum array multiplication using multiple cores
Any chance of making this faster? (numpy.einsum)
numpy.tensordot() only handles binary contractions over a given set of axes, and Numba requires spelling out the loops explicitly. Is there any simple and robust approach to parallelizing einsum (possibly including opt-einsum, tf-einsum, etc.) with arbitrary contractions?
A sample code is as follows (if necessary I can use a more complicated contraction as the example):
import numpy as np
import timeit
import time
na = nc = 1000
nb = 1000
n_iter = 10
A = np.random.random((na,nb))
B = np.random.random((nb,nc))
t_total = 0.
for i in range(n_iter):
    start = time.time()
    C = np.einsum('ij,jk->ik', A, B)
    end = time.time()
    t_total += end - start
print('AB->C', t_total/n_iter)
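One thing worth timing for this particular contraction is the optimize flag of np.einsum, which computes a contraction path and dispatches matrix-multiplication-like pieces to BLAS (itself multithreaded in most NumPy builds). This is only a suggestion to benchmark, not a general answer for arbitrary contractions:
# same timing loop, but with contraction-path optimization enabled
t_total = 0.
for i in range(n_iter):
    start = time.time()
    C = np.einsum('ij,jk->ik', A, B, optimize=True)
    end = time.time()
    t_total += end - start
print('AB->C (optimize=True)', t_total/n_iter)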

Numeric Integration Python versus Matlab

My python code takes about 6.2 seconds to run. The Matlab code runs in under 0.05 seconds. Why is this and what can I do to speed up the Python code? Is Cython the solution?
Matlab:
function X=Test
nIter=1000000;
Step=.001;
X0=1;
X=zeros(1,nIter+1); X(1)=X0;
tic
for i=1:nIter
    X(i+1)=X(i)+Step*(X(i)^2*cos(i*Step+X(i)));
end
toc
figure(1)
plot(0:nIter,X)
Python:
import numpy as np
import time

nIter = 1000000
Step = .001
x = np.zeros(1+nIter)
x[0] = 1
start = time.time()
for i in range(1,1+nIter):
    x[i] = x[i-1] + Step*x[i-1]**2*np.cos(Step*(i-1)+x[i-1])
end = time.time()
print(end - start)
How to speed up your Python code
Your largest time sink is np.cos, which performs several checks on the format of its input.
These checks are relevant, and their cost is usually negligible, for large array inputs, but for the scalar inputs in your loop they become the bottleneck.
The solution is to use math.cos, which only accepts plain scalars as input and is therefore faster (though less flexible).
Another time sink is indexing x multiple times.
You can speed this up by having one state variable which you update and only writing to x once per iteration.
With all of this, you can speed up things by a factor of roughly ten:
import numpy as np
from math import cos
nIter = 1000000
Step = .001
x = np.zeros(1+nIter)
state = x[0] = 1
for i in range(nIter):
    state += Step*state**2*cos(Step*i+state)
    x[i+1] = state
Now, your main problem is that your truly innermost loop happens completely in Python, i.e., you have a lot of wrapping operations that eat up time.
You can avoid this by using uFuncs (e.g., created with SymPy’s ufuncify) and using NumPy’s accumulate:
import numpy as np
from sympy.utilities.autowrap import ufuncify
from sympy.abc import t,y
from sympy import cos

nIter = 1000000
Step = 0.001

f = ufuncify([y,t], y + Step*y**2*cos(t+y))
times = np.arange(0, nIter*Step, Step)
times[0] = 1   # the first entry serves as the initial condition for accumulate
x = f.accumulate(times)
This runs practically within an instant.
… and why that’s not what you should worry about
If your exact code (and only that) is what you care about, then you shouldn’t worry about runtime anyway, because it’s very short either way.
If, on the other hand, you use this to gauge efficiency for problems with a considerable runtime, your example will fail because it considers only one initial condition and has very simple dynamics.
Moreover, you are using the Euler method, which, depending on your step size, is either inefficient or not robust.
The latter (Step) is absurdly small in your case, yielding much more data than you probably need; even with a step size of 1, you can see what's going on just fine.
If you want a robust integration in such cases, it's almost always best to use a modern adaptive integrator that can adjust its step size itself. For example, here is a solution to your problem using SciPy's solve_ivp:
from math import cos
import numpy as np
from scipy.integrate import solve_ivp
T = 1000
dt = 0.001
x = solve_ivp(
        lambda t, state: state**2*cos(t+state),
        t_span=(0, T),
        t_eval=np.arange(0, T, dt),
        y0=[1],
        rtol=1e-5,
    ).y
This automatically adjusts the step size to something higher, depending on the error tolerance rtol.
It still returns the same amount of output data, but that’s via interpolation of the solution.
It runs in 0.3 s for me.
How to speed up things in a scalable manner
If you still need to speed up something like this, chances are that your derivative (f) is considerably more complex than in your example and thus it is the bottleneck.
Depending on your problem, you may be able to vectorise its calculation (using NumPy or similar).
If you can't vectorise, I wrote a module that specifically focuses on this by hard-coding your derivative under the hood.
Here is your example with a sampling step of 1:
import numpy as np
from jitcode import jitcode,y,t
from symengine import cos
T = 1000
dt = 1
ODE = jitcode([y(0)**2*cos(t+y(0))])
ODE.set_initial_value([1])
ODE.set_integrator("dop853")
x = np.hstack([ODE.integrate(t) for t in np.arange(0,T,dt)])
This runs again within an instant. While this may not be a relevant speed boost here, this is scalable to huge systems.
The difference is JIT compilation, which Matlab uses by default. Let's try your example with Numba (a Python JIT compiler).
Code
import numba as nb
import numpy as np
import time

nIter = 1000000
Step = .001

@nb.njit()
def integrate(nIter, Step):
    x = np.zeros(1+nIter)
    x[0] = 1
    for i in range(1, 1+nIter):
        x[i] = x[i-1] + Step*x[i-1]**2*np.cos(Step*(i-1)+x[i-1])
    return x

# Avoid measuring the compilation time;
# this would also be recommendable for Matlab to get a fair comparison
res = integrate(nIter, Step)

start = time.time()
for i in range(100):
    res = integrate(nIter, Step)
end = time.time()
print((end - start)/100)
This results in 0.022s runtime per call.

Solving nonlinear overdetermined system using python

I'm trying to find a good way to solve a nonlinear overdetermined system with python. I looked into optimization tools here http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html but I can't figure out how to use them. What I have so far is
#overdetermined nonlinear system that I'll be using
'''
a = cos(x)*cos(y)
b = cos(x)*sin(y)
c = -sin(y)
d = sin(z)*sin(y)*sin(x) + cos(z)*cos(y)
e = cos(x)*sin(z)
f = cos(z)*sin(x)*cos(z) + sin(z)*sin(x)
g = cos(z)*sin(x)*sin(y) - sin(z)*cos(y)
h = cos(x)*cos(z)
a-h will be random int values in the range 0-10 inclusive
'''
import math
from random import randint
import scipy.optimize
def system(p):
    x, y, z = p
    return (math.cos(x)*math.cos(y) - randint(0,10),
            math.cos(x)*math.sin(y) - randint(0,10),
            -math.sin(y) - randint(0,10),
            math.sin(z)*math.sin(y)*math.sin(x) + math.cos(z)*math.cos(y) - randint(0,10),
            math.cos(x)*math.sin(z) - randint(0,10),
            math.cos(z)*math.sin(x)*math.cos(z) + math.sin(z)*math.sin(x) - randint(0,10),
            math.cos(z)*math.sin(x)*math.sin(y) - math.sin(z)*math.cos(y) - randint(0,10),
            math.cos(x)*math.cos(z) - randint(0,10))
x = scipy.optimize.broyden1(system, [1,1,1], f_tol=1e-14)
could you help me out a bit here?
If I understand you right, you want to find an approximate solution to the nonlinear system of equations f(x) = b, where b is the vector containing the random values b = [a, ..., h].
In order to do this, you first need to remove the random values from the system function, because otherwise the solver tries to solve a different equation system in each iteration. Moreover, I think the basic Broyden method only works for a system with as many unknowns as equations. Alternatively, you could use scipy.optimize.leastsq. A possible solution looks like this:
# I am using numpy because it's more convenient for the generation of
# random numbers.
import math
import numpy as np
from numpy.random import randint
import scipy.optimize

# Removed random right-hand side values and changed nomenclature a bit.
def f(x):
    x1, x2, x3 = x
    return np.asarray((math.cos(x1)*math.cos(x2),
                       math.cos(x1)*math.sin(x2),
                       -math.sin(x2),
                       math.sin(x3)*math.sin(x2)*math.sin(x1) + math.cos(x3)*math.cos(x2),
                       math.cos(x1)*math.sin(x3),
                       math.cos(x3)*math.sin(x1)*math.cos(x3) + math.sin(x3)*math.sin(x1),
                       math.cos(x3)*math.sin(x1)*math.sin(x2) - math.sin(x3)*math.cos(x2),
                       math.cos(x1)*math.cos(x3)))

# The second parameter is used to set the right-hand side vector via the
# args argument of leastsq.
def system(x, b):
    return f(x) - b

b = randint(0, 10, size=8)
x = scipy.optimize.leastsq(system, np.asarray((1, 1, 1)), args=b)[0]
I hope this is of help to you. However, note that it is extremely unlikely that you will find a solution, especially when you generate random integers in the interval [0, 10] while the range of each component of f is limited to [-2, 2].
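To make the last point concrete, the residual at the returned x shows how far the least-squares fit stays from the (generally unreachable) right-hand side; this is just an illustrative check on top of the code above:
residual = system(x, b)
print(np.linalg.norm(residual))  # typically large, since b lies mostly outside the range of f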

Why is numpy.power 60x slower than in-lining?

Maybe I'm doing something odd, but I found a surprising performance loss when using numpy, and it seems consistent regardless of the power used. For instance, when x is a random 100x100 array,
x = numpy.power(x,3)
is about 60x slower than
x = x*x*x
A plot of the speed up for various array sizes reveals a sweet spot with arrays around size 10k and a consistent 5-10x speed up for other sizes.
Code to test below on your own machine (a little messy):
import numpy as np
from matplotlib import pyplot as plt
from time import time
ratios = []
sizes = []
for n in np.logspace(1,3,20).astype(int):
    a = np.random.randn(n,n)
    inline_times = []
    for i in range(100):
        t = time()
        b = a*a*a
        inline_times.append(time()-t)
    inline_time = np.mean(inline_times)
    pow_times = []
    for i in range(100):
        t = time()
        b = np.power(a,3)
        pow_times.append(time()-t)
    pow_time = np.mean(pow_times)
    sizes.append(a.size)
    ratios.append(pow_time/inline_time)
plt.plot(sizes,ratios)
plt.title('Performance of inline vs numpy.power')
plt.ylabel('Nx speed-up using inline')
plt.xlabel('Array size')
plt.xscale('log')
plt.show()
Anyone have an explanation?
It's well known that multiplication of doubles, which your processor can do in a very fancy way, is very, very fast. pow is decidedly slower.
Some performance guides out there even advise people to plan for this, perhaps even in some way that might be a bit overzealous at times.
numpy special-cases squaring to make sure it's not too, too slow, but it sends cubing right off to your libc's pow, which isn't nearly as fast as a couple multiplications.
I suspect the issue is that np.power always does float exponentiation, and it doesn't know how to optimize or vectorize that on your platform (or, probably, most/all platforms), while multiplication is easy to toss into SSE, and pretty fast even if you don't.
Even if np.power were smart enough to do integer exponentiation separately, unless it unrolled small values into repeated multiplication, it still wouldn't be nearly as fast.
You can verify this pretty easily by comparing the time for int-to-int, int-to-float, float-to-int, and float-to-float powers vs. multiplication for a small array; int-to-int is about 5x as fast as the others, but still 4x slower than multiplication (although I tested with PyPy and a customized NumPy, so it's probably better for someone with the normal NumPy installed on CPython to give real results).
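A quick way to check the squaring special case and the cubing slowdown on your own machine; this is only a rough timing sketch, and the exact numbers will depend on your platform and NumPy build:
import timeit
import numpy as np

a = np.random.randn(100, 100)
for stmt in ["a * a", "np.power(a, 2)", "a * a * a", "np.power(a, 3)"]:
    t = timeit.timeit(stmt, number=1000, globals={"a": a, "np": np})
    print(stmt, t)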
The performance of numpy's power function scales very non-linearly with the exponent. Contrast this with the naive approach, whose cost grows linearly with the exponent. The same type of scaling should exist regardless of matrix size. Basically, unless the exponent is sufficiently large, you aren't going to see any tangible benefit.
import matplotlib.pyplot as plt
import numpy as np
import functools
import time
def timeit(func):
    @functools.wraps(func)
    def newfunc(*args, **kwargs):
        startTime = time.time()
        res = func(*args, **kwargs)
        elapsedTime = time.time() - startTime
        return (res, elapsedTime)
    return newfunc

@timeit
def naive_power(m, n):
    m = np.asarray(m)
    res = m.copy()
    for i in range(1, n):
        res *= m
    return res

@timeit
def fast_power(m, n):
    # elementwise power
    return np.power(m, n)

m = np.random.random((100,100))
n = 400

rs1 = []
ts1 = []
ts2 = []
for i in range(1, n):
    r1, t1 = naive_power(m, i)
    ts1.append(t1)
for i in range(1, n):
    r2, t2 = fast_power(m, i)
    ts2.append(t2)

plt.plot(ts1, label='naive')
plt.plot(ts2, label='numpy')
plt.xlabel('exponent')
plt.ylabel('time')
plt.legend(loc='upper left')
plt.show()

SciPy: generating custom random variable from PMF

I'm trying to generate random variables according to a certain ugly distribution in Python. I have an explicit expression for the PMF, but it involves some products which make it unpleasant to obtain and invert the CDF (see the code below for the explicit form of the PMF).
In essence, I'm trying to define a random variable in Python by its PMF and then have built-in code do the hard work of sampling from the distribution. I know how to do this if the support of the RV is finite, but here the support is countably infinite.
The code I am currently trying to run, as per @askewchan's advice below, is:
import scipy as sp
import scipy.stats
import numpy as np

class limitrv_gen(sp.stats.rv_discrete):
    def _pmf(self, k, param):
        num = np.arange(1+param, k+param, 1)
        denom = np.arange(3+2*param, k+3+2*param, 1)
        p = (2+param)*(np.prod(num)/np.prod(denom))
        return p

pa_limit = limitrv_gen()
print(pa_limit.rvs(alpha, n=1))
However, running this returns the following error:
File "limiting_sim.py", line 42, in _pmf
num = np.arange(1+param, k+param, 1)
TypeError: only length-1 arrays can be converted to Python scalars
Basically, it seems that the np.arange() call isn't working inside the _pmf() function, and I'm at a loss to see why. Can anyone enlighten me here and/or point out a fix?
EDIT 1: cleared up some questions by askewchan, edits reflected above.
EDIT 2: askewchan suggested an interesting approximation using the factorial function, but I'm looking more for an exact solution such as the one I'm trying to get to work with np.arange.
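For what it's worth, the failure can be reproduced outside the class: rv_discrete hands k to _pmf as an array, and np.arange cannot take an array as its stop argument. A minimal sketch with arbitrarily chosen values:
import numpy as np

param = 0.5
k = np.array([1, 2, 3])              # rv_discrete passes k like this, not as a scalar
np.arange(1 + param, k + param, 1)   # raises a TypeError like the one in the traceback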
You should be able to subclass rv_discrete like so:
class mydist_gen(rv_discrete):
    def _pmf(self, n, param):
        return yourpmf(n, param)
Then you can create a distribution instance with:
mydist = mydist_gen()
And generate samples with:
mydist.rvs(param, size=1000)
Or you can then create a frozen distribution object with:
mydistp = mydist(param)
And finally generate samples with:
mydistp.rvs(1000)
With your example, this should work, since factorial automatically broadcasts. But, it might fail for large enough alpha:
import scipy as sp
import scipy.stats
import numpy as np
from scipy.misc import factorial

class limitrv_gen(sp.stats.rv_discrete):
    def _pmf(self, k, alpha):
        #num = np.prod(np.arange(1+alpha, k+alpha))
        num = factorial(k+alpha-1) / factorial(alpha)
        #denom = np.prod(np.arange(3+2*alpha, k+3+2*alpha))
        denom = factorial(k + 2 + 2*alpha) / factorial(2 + 2*alpha)
        return (2+alpha) * num / denom

pa_limit = limitrv_gen()
alpha = 100
pa_limit.rvs(alpha, size=10)
