Order of operations and fast computation in Python - python

I am trying to optimize a Python code where the following mathematical function has to be evaluated many (many) times
    from math import pow, pi
    from scipy.special import gamma

    def norm_spirals(a, k):
        return pi * pow(abs(gamma(0.25 + 0.5*k + 0.5j*a) / gamma(0.75 + 0.5*k + 0.5j*a)), 2)
I have already optimized my code as much as possible using cProfile and timeit. I also got rid of the call to the procedure and directly embedded this calculation in the code. The last step of optimization I could think of is to tune the order of the mathematical operations in order to accelerate the evaluation. The formula above is the fastest form that I have been able to obtain using timeit.
Can you think of another order of calculation for this formula that could be faster?
Do you have any ideas for other optimizations I could use?
Thank you in advance for your help!

Here are four simple transformations to speed up your code:
1) Replace pow(x, 2) with x ** 2.0:
    def norm_spirals(a, k):
        return pi * abs(gamma(0.25 + 0.5*k + 0.5j*a) / gamma(0.75 + 0.5*k + 0.5j*a)) ** 2.0
2) Factor out the common subexpression 0.25 + 0.5*k + 0.5j*a:
    def norm_spirals(a, k):
        se = 0.25 + 0.5*k + 0.5j*a
        return pi * abs(gamma(se)/gamma(se + 0.5)) ** 2.0
3) Localize the global lookups:
    def norm_spirals(a, k, pi=pi, abs=abs, gamma=gamma):
        se = 0.25 + 0.5*k + 0.5j*a
        return pi * abs(gamma(se)/gamma(se + 0.5)) ** 2.0
4) Replace the square operation with b * b. This is called strength reduction:
    def norm_spirals(a, k, pi=pi, abs=abs, gamma=gamma):
        se = 0.25 + 0.5*k + 0.5j * a
        b = abs(gamma(se)/gamma(se + 0.5))
        return b * b * pi
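Putting the transformations together, a quick way to confirm they agree numerically and to time them is a `timeit` sketch. Note that `gamma` below is a stand-in so the sketch runs without scipy; swap in `scipy.special.gamma` for a real measurement:

```python
import math
import timeit

def gamma(z):
    # Stand-in so the sketch runs without scipy;
    # replace with scipy.special.gamma for real use.
    return z + 1.0

pi = math.pi

def original(a, k):
    return pi * abs(gamma(0.25 + 0.5*k + 0.5j*a) / gamma(0.75 + 0.5*k + 0.5j*a)) ** 2.0

def tuned(a, k, pi=pi, abs=abs, gamma=gamma):
    # common subexpression + localized globals + strength-reduced square
    se = 0.25 + 0.5*k + 0.5j*a
    b = abs(gamma(se) / gamma(se + 0.5))
    return b * b * pi

assert math.isclose(original(1.3, 2), tuned(1.3, 2))

for f in (original, tuned):
    print(f.__name__, timeit.timeit(lambda: f(1.3, 2), number=100_000))
```

On CPython these micro-optimizations usually buy a few percent at best, so measuring with timeit, as here, is the only reliable check.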

I got it 15% faster (Python 2.7) by introducing a temporary variable and removing the call to pow:
    def norm_spirals(a, k):
        _ = 0.5 * k + 0.5j * a
        return pi * abs(gamma(0.25 + _) / gamma(0.75 + _)) ** 2
For Python 3 the gain was just 11%.
Disclaimer:
I didn't have scipy installed, so for gamma I used

    def gamma(a):
        return a

So "15% faster" might be a bit misleading.

If norm_spirals() is invoked many times with the same arguments, then you can try memoization to cache the results and see if it makes a difference.
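For instance, with `functools.lru_cache` (a sketch; `gamma` here is a stand-in so it runs without scipy, and note the cached function's arguments must be hashable, which floats are):

```python
from functools import lru_cache
from math import pi

def gamma(z):
    return z + 1.0   # stand-in for scipy.special.gamma, only for this sketch

@lru_cache(maxsize=None)
def norm_spirals(a, k):
    se = 0.25 + 0.5*k + 0.5j*a
    return pi * abs(gamma(se) / gamma(se + 0.5)) ** 2

norm_spirals(1.0, 2)               # computed
norm_spirals(1.0, 2)               # served from the cache
print(norm_spirals.cache_info())   # hits=1, misses=1
```

Whether this helps depends entirely on how often the same (a, k) pair recurs; if every call is unique, the cache only adds overhead.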

Related

SymPy: there remain some terms which should obviously vanish

I want to calculate the derivative of a function using the following code.
    import sympy

    pi = sympy.symbols("pi")

    class H(sympy.Function):
        nargs = 1
        def fdiff(self, argindex=1):
            x = self.args[0]
            return -sympy.functions.exp(-sympy.Pow(x, 2) / 2) / sympy.sqrt(2 * pi)

    def G(a):
        return (
            (a + 1) * H(1 / sympy.sqrt(a))
            - sympy.sqrt(a / (2 * pi)) * sympy.functions.exp(-1 / (2 * a))
        )

    x = sympy.symbols("x")
    sympy.simplify(sympy.diff(G(x), x))
It is expected to be G'(x) = H(1 / sqrt(x)), but I got
    Out[1]: H(1/sqrt(x)) - sqrt(2)*sqrt(x/pi)*exp(-1/(2*x))/(4*x) - sqrt(2)*sqrt(x/pi)*exp(-1/(2*x))/(4*x**2) + sqrt(2)*exp(-1/(2*x))/(4*sqrt(pi)*sqrt(x)) + sqrt(2)*exp(-1/(2*x))/(4*sqrt(pi)*x**(3/2))
The remaining terms should obviously be 0 to the human eye.
I then tried changing the two pis in the definitions of H and G to sympy.pi, which returns H(1/sqrt(x)) as I expected.
Why does my first code return these extra terms?
SymPy has built-in rules which allow certain transformations to happen (sometimes automatically) or to be prohibited (by default). When you defined pi as a Symbol, you created a generic symbol whose only assumption is that it is commutative. But the number pi is not just commutative, it is also positive. That assumption allows something like sqrt(x/y) to automatically rewrite as sqrt(y)*sqrt(x)/y when y is positive:
    >>> sqrt(x/y)
    sqrt(x/y)
    >>> sqrt(x/3)
    sqrt(3)*sqrt(x)/3
If you take your last expression and substitute a positive value for the symbol pi, you will get that rewrite, and then the cancelling terms will cancel:
    >>> sympy.simplify(sympy.diff(G(x), x)).subs(pi, 3)
    H(1/sqrt(x))
As Johan points out, it is better in this case to just use SymPy's S.Pi:
    >>> S.Pi.n(3)
    3.14
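Alternatively, if you want to keep pi symbolic (e.g. for display), a sketch of the question's code with only one change, giving the symbol the positive assumption that the number pi actually has, which per the reasoning above should let the extra terms cancel:

```python
import sympy

# Same code as the question, except the symbol carries positive=True,
# which is the assumption that enables the sqrt rewrite.
pi = sympy.symbols("pi", positive=True)
x = sympy.symbols("x")

class H(sympy.Function):
    nargs = 1
    def fdiff(self, argindex=1):
        u = self.args[0]
        return -sympy.exp(-u**2 / 2) / sympy.sqrt(2 * pi)

def G(a):
    return (a + 1) * H(1 / sympy.sqrt(a)) - sympy.sqrt(a / (2 * pi)) * sympy.exp(-1 / (2 * a))

result = sympy.simplify(sympy.diff(G(x), x))
print(result)
```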

Simplifying large coefficients in SymPy

I use SymPy for symbolic calculations in Python and get e.g. an expression like
    Z = 2.02416176758139e+229 / (2.42579446609411e+232 * s + 9.8784848517664e+231)
Is there a function in SymPy (e.g. in sympy.simplify?) to get something like
    Z = 2.024 / (2425.8 * s + 987.8)
The intent of your request can be met by allowing cancellation:
    >>> 1/(1/Z)
    1/(1198.41926912424*s + 488.028427864731)
or
    >>> factor(Z)
    0.00083443251102827/(1.0*s + 0.407226786516346)
But there is no built-in method of accessing the exponent in an exponential form.

Python: looking for a faster, less accurate sqrt() function

I'm looking for a cheaper, less accurate square root function for a high volume of Pythagoras calculations that do not need highly accurate results. Inputs are positive integers, and I can upper-bound the input if necessary. Output to 1 d.p. with accuracy +/- 0.1 would be good, but I could even get away with output to the nearest integer +/- 1. Is there anything built into Python that can help with this? Something like math.sqrt() that makes fewer approximation passes, perhaps?
As I said in my comment, I do not think you will do much better in speed than math.sqrt in native Python, given its linkage to C's sqrt function. However, your question indicates that you need to perform a lot of "Pythagoras calculations". I am assuming you mean you have a lot of triangles with sides a and b and you want to find the c value for all of them. If so, the following will be quick enough for you. It leverages vectorization with numpy:
    import numpy as np

    all_as = ...  # python list of all of your a values
    all_bs = ...  # python list of all of your b values

    cs = np.sqrt(np.array(all_as)**2 + np.array(all_bs)**2).tolist()
If your use case is different, then please update your question with the kind of data you have and what operation you want.
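As a side note, NumPy also ships a vectorized hypotenuse routine, np.hypot, which does the square-sum-root in a single call; a minor variant of the same idea, not necessarily faster:

```python
import numpy as np

all_as = [3.0, 5.0, 8.0]      # example a sides (hypothetical data)
all_bs = [4.0, 12.0, 15.0]    # example b sides (hypothetical data)

# Element-wise sqrt(a**2 + b**2) in one vectorized call.
cs = np.hypot(all_as, all_bs).tolist()
print(cs)   # ≈ [5.0, 13.0, 17.0]
```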
However, if you really want a Python implementation of fast square rooting, you can use Newton's method to do this:
    def fast_sqrt(y, tolerance=0.05):
        prev = -1.0
        x = 1.0
        while abs(x - prev) > tolerance:  # not yet within tolerance
            prev = x
            x = x - (x * x - y) / (2 * x)
        return x
However, even with a very high tolerance (0.5 is absurd), you will most likely not beat math.sqrt. Although I have no benchmarks to back this up :) I can make them for you (or you can do it too!)
@modesitt was faster than me :)
Newton's method is the way to go; my contribution is an implementation of Newton's method that is a bit faster than the one modesitt suggested (take sqrt(65) for example: the following method returns after 4 iterations, vs 6 iterations for fast_sqrt).
    def sqrt(x):
        delta = 0.1
        runner = x / 2
        while abs(runner - (x / runner)) > delta:
            runner = ((x / runner) + runner) / 2
        return runner
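The iteration-count claim can be checked by instrumenting both loops; a sketch (the exact counts depend on the starting guess and the tolerance):

```python
def fast_sqrt(y, tolerance=0.05):
    # Newton's method on f(x) = x*x - y, starting from x = 1.
    prev, x, steps = -1.0, 1.0, 0
    while abs(x - prev) > tolerance:
        prev, x = x, x - (x * x - y) / (2 * x)
        steps += 1
    return x, steps

def heron_sqrt(x, delta=0.1):
    # The Babylonian/Heron variant above, starting from x / 2.
    runner, steps = x / 2, 0
    while abs(runner - x / runner) > delta:
        runner = (x / runner + runner) / 2
        steps += 1
    return runner, steps

print(fast_sqrt(65))    # converges in 6 steps
print(heron_sqrt(65))   # converges in 4 steps
```

Note that x - (x*x - y)/(2*x) simplifies to (x + y/x)/2, so the two loops take exactly the same step; the difference is the starting guess and the stopping test.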
That said, math.sqrt will most certainly be faster than any implementation that you'll come up with. Let's benchmark the two:
    import time
    import math

    def timeit1():
        s = time.time()
        for i in range(1, 1000000):
            x = sqrt(i)
        print("sqrt took %f seconds" % (time.time() - s))

    def timeit2():
        s = time.time()
        for i in range(1, 1000000):
            x = math.sqrt(i)
        print("math.sqrt took %f seconds" % (time.time() - s))

    timeit1()
    timeit2()
The output that I got on my machine (MacBook Pro):

    sqrt took 3.229701 seconds
    math.sqrt took 0.074377 seconds

Fortran equivalent of python numpy.minimum

I am trying to rewrite some Python code in Fortran, specifically the line

    separation[a, :] = sum(np.minimum(1 - distances, distances) ** 2)

The important part is the use of np.minimum to take the element-wise minimum of two multi-dimensional arrays. distances is a (3, N) array of N coordinates (x, y, z). I can't find a similar function in Fortran, so I wrote my own using:
    do b = 1, N
        temp = 0.0
        do c = 1, 3
            if ((1 - distances(c, b)) .LE. distances(c, b)) then
                temp = temp + (1 - distances(c, b)) ** 2
            else
                temp = temp + (distances(c, b)) ** 2
            end if
        end do
        separation(a, b) = temp
    end do
Unsurprisingly, this code is very slow. I am not very experienced in Fortran yet, so any recommendations for improving this code, or suggestions of an alternative method, would be greatly appreciated.
I thought perhaps a where statement might help, as the following code in Python works:

    separation[a, :] = sum(np.where(1 - distances <= distances, (1 - distances), distances) ** 2)

But while Fortran has where statements, they seem to work rather differently from Python's, and they don't seem to be much use here.
It is nearly the same: most Fortran intrinsics operate element-wise on arrays (assuming you have at least Fortran 95):
    real a(2,4), b(4)
    a = reshape([.1,.2,.3,.4,.5,.6,.7,.8], shape(a))
    b = sum(min(1-a, a)**2, 1)
    write(*,'(4f5.2)') b
    end

    0.05 0.25 0.41 0.13
Note that Fortran's sum would by default sum the whole array, hence the explicit dim argument of 1 here.
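For reference, the NumPy equivalent of that Fortran snippet, with order='F' reproducing the column-major reshape and axis=0 matching sum(..., 1) over the first dimension (assuming NumPy is available):

```python
import numpy as np

# Column-major fill, matching Fortran's reshape.
a = np.reshape([.1, .2, .3, .4, .5, .6, .7, .8], (2, 4), order="F")
# Sum of squared element-wise minima over the first dimension.
b = np.sum(np.minimum(1 - a, a) ** 2, axis=0)
print(b)   # ≈ [0.05 0.25 0.41 0.13]
```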

distance formula in python bug

I'm calculating the length of a line segment in Python, but I don't understand why one piece of code gives me zero and the other gives the right answer.
This piece of code gives me zero:
    def distance(a, b):
        y = b[1]-a[1]
        x = b[0]-a[0]
        ans=y^2+x^2
        return ans^(1/2)
This one gives me the right answer:
    import math as math
    def distance(a, b):
        y = b[1]-a[1]
        x = b[0]-a[0]
        ans=y*y+x*x
        return math.sqrt(ans)
Thank you.
In your first snippet you have written this:
    ans^(1/2)
In Python the power operator is not ^, that's the XOR-operator. The power operator in Python is **. On top of that, in Python 2.x by default the result of the division of two integers is an integer, so 1/2 will evaluate as 0. The correct way would be this:
    ans ** 0.5
And another thing: the function you have implemented here can be written much more easily with math.hypot:
    import math

    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
Try doing x**2 rather than x^2 (which is XOR), or use the math.pow function.
And also, in Python 2, 1/2 is 0 and not 0.5.
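A quick interactive check makes the difference between the two operators concrete:

```python
print(3 ** 2)     # 9   (exponentiation)
print(3 ^ 2)      # 1   (bitwise XOR: 0b11 ^ 0b10 == 0b01)
print(25 ** 0.5)  # 5.0 (use a float exponent, or math.sqrt)
```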
