Python efficient vectorization for Monte Carlo based Pi calculation - python

For approximating the value of Pi consider this stochastic method that populates an array with random values and tests for unit circle inclusion,
import random as rd
import numpy as np
def r(_): return rd.random()
def np_pi(n):
v_r = np.vectorize(r)
x = v_r(np.zeros(n))
y = v_r(np.zeros(n))
return sum (x*x + y*y <= 1) * 4. / n
Note the random number generation relies on Python standard library; consider though numpy random generation,
def np_pi(n):
x = np.random.random(n)
y = np.random.random(n)
return sum (x*x + y*y <= 1) * 4. / n
Consider now the non-vectorized approach,
import random as rd
def dart_board():
x,y = rd.random(), rd.random()
return (x*x + y*y <= 1)
def pi(n):
s = sum([dart_board() for _ in range(n)])
return s * 4. / n
The non-vectorized form proves 4 times faster in average than the vectorized counterpart, for instance consider n = 5000000 and OS command line as follows (Python 2.7, Quadcore, 8GB RAM, RedHat Linux),
time python pi.py
time python np_pi.py
Thus to ask how to improve the vectorized approach to improve its performance.

You are invoking the python builtin sum, rather than numpy's vectorized method sum:
import numpy as np
import random as rd
def np_pi(n):
x = np.random.random(n)
y = np.random.random(n)
return (x*x + y*y <= 1).sum()
def dart_board():
x,y = rd.random(), rd.random()
return (x*x + y*y <= 1)
def pi(n):
s = sum([dart_board() for _ in range(n)])
Timing results are now much different:
In [12]: %timeit np_pi(10000)
1000 loops, best of 3: 250 us per loop
In [13]: %timeit pi(10000)
100 loops, best of 3: 3.54 ms per loop
It is my guess that calling the builtin sum on a numpy-array causes overhead by iterating over the array, rather than using vectorized routines.

Related

Python Numba jit function with if statement

I have a piecewise function with 3 parts that I'm trying to write in Python using Numba #jit instruction. The function is calculated over an array. The function is defined by:
#njit(parallel=True)
def f(x_vec):
N=len(x_vec)
y_vec=np.zeros(N)
for i in prange(N):
x=x_vec[i]
if x<=2000:
y=64/x
elif x>=4000:
y=np.log(x)
else:
y=np.log(1.2*x)
y_vec[i]=y
return y_vec
I'm using Numba to make this code very fast and run it on all 8 threads of my CPU.
Now, my question is, if I wanted to define each part of the function separately as f1, f2 and f3, and put those inside the if statements (and still benefit from Numba speed), how can I do that? The reason is that the subfunctions can be more complicated and I don't want to make my code hard to read. I want it to be as fast as this one (or slightly slower but not alot).
In order to test the function, we can use this array:
Np=10000000
x_vec=100*np.power(1e8/100,np.random.rand(Np))
%timeit f(x_vec) #0.06sec on intel core i7 3610
For completionism, the following libraries are called:
import numpy as np
from numba import njit, prange
So in this case, the functions would be:
def f1(x):
return 64/x
def f2(x):
return np.log(x)
def f3(x):
return np.log(1.2*x)
The actual functions are these, which are for smooth pipe friction factor for laminar, transition and turbulent regimes:
#njit
def f1(x):
return 64/x
#njit
def f2(x):
#x is the Reynolds number(Re), y is the Darcy friction(f)
#for transition, we can assume Re=4000 (max possible friction)
y=0.02
y=(-2/np.log(10))*np.log(2.51/(4000*np.sqrt(y)))
return 1/(y*y)
#njit
def f3(x): #colebrook-white approximation
#x is the Reynolds number(Re), y is the Darcy friction(f)
y=0.02
y=(-2/np.log(10))*np.log(2.51/(x*np.sqrt(y)))
return 1/(y*y)
Thanks for contributions from everyone. This is the numpy solution (the last tree lines are slow for some reason, but doesn't need warmup):
y = np.empty_like(x_vec)
a1=np.where(x_vec<=2000,True,False)
a3=np.where(x_vec>=4000,True,False)
a2=~(a1 | a3)
y[a1] = f1(x_vec[a1])
y[a2] = f2(x_vec[a2])
y[a3] = f3(x_vec[a3])
The fastest Numba solution, allowing for passing function names and taking advantage of prange (but hindered by jit warmup) is this, which can be as fast as the first solution (top of the question):
#njit(parallel=True)
def f(x_vec,f1,f2,f3):
N = len(x_vec)
y_vec = np.zeros(N)
for i in prange(N):
x=x_vec[i]
if x<=2000:
y=f1(x)
elif x>=4000:
y=f3(x)
else:
y=f2(x)
y_vec[i]=y
return y_vec
You can write f() to accept function parameters, e.g.:
#njit
def f(arr, f1, f2, f3):
N = len(arr)
y_vec = np.zeros(N)
for i in range(N):
x = x_vec[i]
if x <= 2000:
y = f1(x)
elif x >= 4000:
y = f2(x)
else:
y = f3(x)
y_vec[i] = y
return y_vec
Make sure that the function you pass are Numba compatible.
Is this too slow? This can be done in pure numpy, by avoiding loops and using masks for indexing:
def f(x):
y = np.empty_like(x)
mask = x <= 2000
y[mask] = 64 / x[mask]
mask = (x > 2000) & (x < 4000)
y[mask] = np.log(1.2 * x[mask])
mask = x >= 4000
y[mask] = np.log(x[mask])
return y
You can also run the "else" case by first applying the middle part without any mask to the whole array, it's probably a bit slower:
def f_else(x):
y = np.log(1.2 * x)
mask = x <= 2000
y[mask] = 64 / x[mask]
mask = x >= 4000
y[mask] = np.log(x[mask])
return y
With
Np=10000000
x_vec=100*np.power(1e8/100,np.random.rand(Np))
I get (laptop with i7-8850H with 6 + 6VT cores)
f1: 1 loop, best of 5: 294 ms per loop
f_else: 1 loop, best of 5: 400 ms per loop
If your intended subfunctions are mainly numpy-operations this will still be fast.

Iterative Binomial Update without Loop

Can this be done without a loop?
import numpy as np
n = 10
x = np.random.random(n+1)
a, b = 0.45, 0.55
for i in range(n):
x = a*x[:-1] + b*x[1:]
I came across this setup in another question. There it was a covered by a little obscure nomenclature. I guess it is related to Binomial options pricing model but don't quite understand the topic to be honest. I just was intrigued by the formula and this iterative update / shrinking of x and wondered if it can be done without a loop. But I can not wrap my head around it and I am not sure if this is even possible.
What makes me think that it might work is that this vatiaton
n = 10
a, b = 0.301201, 0.59692
x0 = 123
x = x0
for i in range(n):
x = a*x + b*x
# ~42
is actually just x0*(a + b)**n
print(np.allclose(x, x0*(a + b)**n))
# True
You are calculating:
sum( a ** (n - i) * b ** i * x[i] * choose(n, i) for 0 <= i <= n)
[That's meant to be pseudocode, not Python.] I'm not sure of the best way to convert that into Numpy.
choose(n, i) is n!/ (i! (n-i)!), not the numpy choose function.
Using #mathfux's comment, one can do
import numpy as np
from scipy.stats import binom
binomial = binom(p=p, n=n)
pmf = binomial(np.arange(n+1))
res = np.sum(x * pmf)
So
res = x.copy()
for i in range(n):
res = p*res[1:] + (p-1)*res[:-1]
is just the expected value of a binomial distributed random variable x.

How I can improve a loop for in a Python code?

I am translating this code from Matlab to Python. The code function fine but it is painfully slow in python. In Matlab, the code runs in way less then a minute, in python it took 30 min!!! Someone with mode experience in python could help me?
# P({ai})
somai = 0
for i in range(1, n):
somaj = 0
for j in range(1, n):
exponencial = math.exp(-((a[i] - a[j]) * (a[i] - a[j])) / dev_a2 - ((b[i] - b[j]) * (b[i] - b[j])) / dev_b2)
somaj = somaj + exponencial
somai = somai + somaj
As with MATLAB, I'd recommend you vectorize your code. Iterating by for-loops can be much slower than the lower level implementation of MATLAB and numpy.
Your operations (a[i] - a[j])*(a[i] - a[j]) are pairwise squared-Euclidean distance for all N data points. You can calculate a pairwise distance matrix using scipy's pdist and squareform functions -- pdist, squareform.
Then you calculate the difference between pairwise distance matrices A and B, and sum the exponential decay. So you could get a vectorized code like:
import numpy as np
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
# Example data
N = 1000
a = np.random.rand(N,1)
b = np.random.rand(N,1)
dev_a2 = np.random.rand()
dev_b2 = np.random.rand()
# `a` is an [N,1] matrix (i.e. column vector)
A = pdist(a, 'sqeuclidean')
# Change to pairwise distance matrix
A = squareform(A)
# Divide all elements by same divisor
A = A / dev_a2
# Then do the same for `b`'s
# `b` is an [N,1] matrix (i.e. column vector)
B = pdist(b, 'sqeuclidean')
B = squareform(B)
B = B / dev_b2
# Calculate exponential decay
expo = np.exp(-(A-B))
# Sum all elements
total = np.sum(expo)
Here's a quick timing comparison between the iterative method and this vectorized code.
N: 1000 | Iter Output: 2729989.851117 | Vect Output: 2732194.924364
Iter time: 6.759 secs | Vect time: 0.031 secs
N: 5000 | Iter Output: 24855530.997400 | Vect Output: 24864471.007726
Iter time: 171.795 secs | Vect time: 0.784 secs
Note that the final results are not exactly the same. I'm not sure why this is, it might be rounding error or math error on my part, but I'll leave that to you.
TLDR
Use numpy
Why Numpy?
Python, by default, is slow. One of the powers of python is that it plays nicely with C and has tons of libraries. The one that will help you hear is numpy. Numpy is mostly implemented in C and, when used properly, is blazing fast. The trick is to phrase the code in such a way that you keep the execution inside numpy and outside of python proper.
Code and Results
import math
import numpy as np
n = 1000
np_a = np.random.rand(n)
a = list(np_a)
np_b = np.random.rand(n)
b = list(np_b)
dev_a2, dev_b2 = (1, 1)
def old():
somai = 0.0
for i in range(0, n):
somaj = 0.0
for j in range(0, n):
tmp_1 = -((a[i] - a[j]) * (a[i] - a[j])) / dev_a2
tmp_2 = -((b[i] - b[j]) * (b[i] - b[j])) / dev_b2
exponencial = math.exp(tmp_1 + tmp_2)
somaj += exponencial
somai += somaj
return somai
def new():
tmp_1 = -np.square(np.subtract.outer(np_a, np_a)) / dev_a2
tmp_2 = -np.square(np.subtract.outer(np_b, np_b)) / dev_a2
exponential = np.exp(tmp_1 + tmp_2)
somai = np.sum(exponential)
return somai
old = 1.76 s ± 48.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
new = 24.6 ms ± 66.1 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
This is about a 70x improvement
old yields 740919.6020840995
new yields 740919.602084099
Explanation
You'll notice I broke up your code with the tmp_1 and tmp_2 a bit for clarity.
np.random.rand(n): This creates an array of length n that has random floats going from 0 to 1 (excluding 1) (documented here).
np.subtract.outer(a, b): Numpy has modules for all the operators that allow you do various things with them. Lets say you had np_a = [1, 2, 3], np.subtract.outer(np_a, np_a) would yield
array([[ 0, -1, -2],
[ 1, 0, -1],
[ 2, 1, 0]])
Here's a stackoverflow link if you want to go deeper on this. (also the word "outer" comes from "outer product" like from linear algebra)
np.square: simply squares every element in the matrix.
/: In numpy when you do arithmetic operators between scalars and matrices it does the appropriate thing and applies that operation to every element in the matrix.
np.exp: like np.square
np.sum: sums every element together and returns a scalar.

Summation Evaluation in python

Given
z = np.linspace(1,10,100)
Calculate Summation over all values of z in z^k * exp((-z^2)/ 2)
import numpy as np
import math
def calc_Summation1(z, k):
ans = 0.0
for i in range(0, len(z)):`
ans += math.pow(z[i], k) * math.exp(math.pow(-z[i], 2) / 2)
return ans
def calc_Summation2(z,k):
part1 = z**k
part2 = math.exp(-z**2 / 2)
return np.dot(part1, part2.transpose())
Can someone tell me what is wrong with both calc_Summation1 and calc_Summation2?
I think this might be what you're looking for:
sum(z_i**k * math.exp(-z_i**2 / 2) for z_i in z)
If you want to vectorize calculations with numpy, you need to use numpy's ufuncs. Also, the usual way of doing you calculation would be:
import numpy as np
calc = np.sum(z**k * np.exp(-z*z / 2))
although you can keep your approach using np.dot if you call np.exp instead of math.exp:
calc = np.dot(z**k, np.exp(-z*z / 2))
It does run faster with dot:
In [1]: z = np.random.rand(1000)
In [2]: %timeit np.sum(z**5 * np.exp(-z*z / 2))
10000 loops, best of 3: 142 µs per loop
In [3]: %timeit np.dot(z**5, np.exp(-z*z / 2))
1000 loops, best of 3: 129 µs per loop
In [4]: np.allclose(np.sum(z**5 * np.exp(-z*z / 2)),
... np.dot(z**5, np.exp(-z*z / 2)))
Out[4]: True
k=1
def myfun(z_i):
return z_i**k * math.exp(-z_i**2 / 2)
sum(map(myfun,z))
We define a function for the thing we want to sum, use the map function to apply it to each value in the list and then sum all these values. Having to use an external variable k is slightly niggling.
A refinement would be to define a two argument function
def myfun2(z_i,k):
return z_i**k * math.exp(-z_i**2 / 2)
and use a lambda expression to evaluate it
sum(map(lambda x:myfun2(x,1), z))

Slow computation: could itertools.product be the culprit?

A numerical integration is taking exponentially longer than I expect it to. I would like to know if the way that I implement the iteration over the mesh could be a contributing factor. My code looks like this:
import numpy as np
import itertools as it
U = np.linspace(0, 2*np.pi)
V = np.linspace(0, np.pi)
for (u, v) in it.product(U,V):
# values = computation on each grid point, does not call any outside functions
# solution = sum(values)
return solution
I left out the computations because they are long and my question is specifically about the way that I have implemented the computation over the parameter space (u, v). I know of alternatives such as numpy.meshgrid; however, these all seem to create instances of (very large) matrices, and I would guess that storing them in memory would slow things down.
Is there an alternative to it.product that would speed up my program, or should I be looking elsewhere for the bottleneck?
Edit: Here is the for loop in question (to see if it can be vectorized).
import random
import numpy as np
import itertools as it
##########################################################################
# Initialize the inputs with random (to save space)
##########################################################################
mat1 = np.array([[random.random() for i in range(3)] for i in range(3)])
mat2 = np.array([[random.random() for i in range(3)] for i in range(3)])
a1, a2, a3 = np.array([random.random() for i in range(3)])
plane_normal = np.array([random.random() for i in range(3)])
plane_point = np.array([random.random() for i in range(3)])
d = np.dot(plane_normal, plane_point)
truthval = True
##########################################################################
# Initialize the loop
##########################################################################
N = 100
U = np.linspace(0, 2*np.pi, N + 1, endpoint = False)
V = np.linspace(0, np.pi, N + 1, endpoint = False)
U = U[1:N+1] V = V[1:N+1]
Vsum = 0
Usum = 0
##########################################################################
# The for loops starts here
##########################################################################
for (u, v) in it.product(U,V):
cart_point = np.array([a1*np.cos(u)*np.sin(v),
a2*np.sin(u)*np.sin(v),
a3*np.cos(v)])
surf_normal = np.array(
[2*x / a**2 for (x, a) in zip(cart_point, [a1,a2,a3])])
differential_area = \
np.sqrt((a1*a2*np.cos(v)*np.sin(v))**2 + \
a3**2*np.sin(v)**4 * \
((a2*np.cos(u))**2 + (a1*np.sin(u))**2)) * \
(np.pi**2 / (2*N**2))
if (np.dot(plane_normal, cart_point) - d > 0) == truthval:
perp_normal = plane_normal
f = np.dot(np.dot(mat2, surf_normal), perp_normal)
Vsum += f*differential_area
else:
perp_normal = - plane_normal
f = np.dot(np.dot(mat2, surf_normal), perp_normal)
Usum += f*differential_area
integral = abs(Vsum) + abs(Usum)
If U.shape == (nu,) and (V.shape == (nv,), then the following arrays vectorize most of your calculations. With numpy you get the best speed by using arrays for the largest dimensions, and looping on the small ones (e.g. 3x3).
Corrected version
A = np.cos(U)[:,None]*np.sin(V)
B = np.sin(U)[:,None]*np.sin(V)
C = np.repeat(np.cos(V)[None,:],U.size,0)
CP = np.dstack([a1*A, a2*B, a3*C])
SN = np.dstack([2*A/a1, 2*B/a2, 2*C/a3])
DA1 = (a1*a2*np.cos(V)*np.sin(V))**2
DA2 = a3*a3*np.sin(V)**4
DA3 = (a2*np.cos(U))**2 + (a1*np.sin(U))**2
DA = DA1 + DA2 * DA3[:,None]
DA = np.sqrt(DA)*(np.pi**2 / (2*Nu*Nv))
D = np.dot(CP, plane_normal)
S = np.sign(D-d)
F1 = np.dot(np.dot(SN, mat2.T), plane_normal)
F = F1 * DA
#F = F * S # apply sign
Vsum = F[S>0].sum()
Usum = F[S<=0].sum()
With the same random values, this produces the same values. On a 100x100 case, it is 10x faster. It's been fun playing with these matrices after a year.
In ipython I did simple sum calculations on your 50 x 50 gridspace
In [31]: sum(u*v for (u,v) in it.product(U,V))
Out[31]: 12337.005501361698
In [33]: UU,VV = np.meshgrid(U,V); sum(sum(UU*VV))
Out[33]: 12337.005501361693
In [34]: timeit UU,VV = np.meshgrid(U,V); sum(sum(UU*VV))
1000 loops, best of 3: 293 us per loop
In [35]: timeit sum(u*v for (u,v) in it.product(U,V))
100 loops, best of 3: 2.95 ms per loop
In [38]: timeit list(it.product(U,V))
1000 loops, best of 3: 213 us per loop
In [45]: timeit UU,VV = np.meshgrid(U,V); (UU*VV).sum().sum()
10000 loops, best of 3: 70.3 us per loop
# using numpy's own sum is even better
product is slower (by factor 10), not because product itself is slow, but because of the point by point calculation. If you can vectorize your calculations so they use the 2 (50,50) arrays (without any sort of looping) it should speed up the overall time. That's the main reason for using numpy.
[k for k in it.product(U,V)] runs in 2ms for me, and the itertool package is made to be efficient, e.g. it does not create a long array first (http://docs.python.org/2/library/itertools.html).
The culprit seems to be your code inside the iteration, or your using a lot of points in linspace.

Categories