Can it be that `sympy` is much, much slower than Mathematica?

I'm reproducing Mathematica results using SymPy, and I'm new to the latter, so I might be doing things wrong. However, I noticed that some things that took a minute at most in Mathematica simply take forever (read: did not finish an hour after I started them) in SymPy. That applies both to simplify() and to solve(). Am I doing something wrong, or is that really the case?
I'll attach my solve() case:
import sympy as sp
from sympy import init_printing
init_printing()
p, r, c, p, y, Lambda = sp.symbols('p r c p y Lambda')
F = sp.Symbol('F')
eta1 = lambda p: 1/(1-sp.exp(-Lambda)) * sp.exp(-Lambda)*(sp.exp(Lambda) - 1 - Lambda)
eta2 = lambda p: 1/(1-sp.exp(-Lambda)) * sp.exp(-Lambda)/(1-F) * (sp.exp(Lambda*(1- F)) - 1 - Lambda*(1-F))
eta = lambda p: 1 - eta1(p) + eta2(p)
etaOfR = sp.limit(eta(p), F, 1)
S = lambda p: eta(p)*y/p*(p-c)
SOfR = etaOfR*y/r*(r-c)
sp.solve(S(p)-SOfR, F)
The corresponding Mathematica code:
ClearAll[r, p, lambda, a, A, c, eta, f, y, constant1, constant2, eta, \
etaOfR]
constant1[lambda_] := Exp[-lambda]/(1 - Exp[-lambda]);
constant2[lambda_] := constant1[lambda]*(Exp[lambda] - 1 - lambda);
eta[lambda_, f_] :=
1 - constant2[lambda] +
constant1[lambda]*(Exp[lambda*(1 - f)] - 1 - lambda*(1 - f)) ;
etaOfR[lambda_] := Limit[eta[lambda, f], f -> 1];
expression1[lambda_, f_] :=
y/p (p - c) eta[lambda, f] == y/r (r - c) etaOfR[lambda];
Solve[expression1[lambda, f], f] // FullSimplify
Output:
{{f -> (-(1 + lambda) p r +
c (lambda p + r) + (c -
p) r ProductLog[-E^(((-c lambda p + (c (-1 + lambda) +
p) r)/((c - p) r)))])/(lambda (c - p) r)}}

The correct way to do it is:
from sympy import *
init_printing()
p, r, c, y, lam, f = symbols('p r c y lambda f')
constant1 = exp(-lam) / (1 - exp(-lam))
constant2 = constant1 * (exp(lam) - 1 - lam)
eta = 1 - constant2 + constant1 * (exp(lam * (1-f)) - 1 - lam * (1 - f))
etaOfR = limit(eta, f, 1)
expression1 = Eq(y / p * (p - c) * eta,
y / r * (r - c) * etaOfR)
solve(expression1, f)
You can also check the notebook here:
http://nbviewer.ipython.org/gist/jankoslavic/0ad7d5c2731d425dabb3
The result is equal to the one from Mathematica (see the last line) and SymPy's performance is comparable.
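As a side note, Mathematica's ProductLog is the Lambert W function, which SymPy exposes as LambertW, so that is the term to expect in the solve() output. A quick standalone sanity check that solve() produces LambertW for this kind of equation:
from sympy import LambertW, exp, solve, symbols
x = symbols('x')
print(solve(x * exp(x) - 1, x))  # [LambertW(1)]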

Related

Numpy to solve multi-variable algebraic function

Let's assume we know the following:
a = 1
b = 4
c = 7
d = 2
e = 2
f = 9
With these six variables, we can solve for X, Y, and Z as follows:
X = (b - a) / (d + e)
Y = 2 * np.sin(X/2) * ((c / X) + f)
Z = 2 * np.sin(X/2) * ((a / X) + d)
print(X)
print(Y)
print(Z)
0.75
13.42999273315508
2.4418168605736503
Now, let's flip things around and assume that we're given the values of X, Y, and Z, as well as d, e, and f.
How would we solve for the values of a, b, and c? My algebra is shaky. Is this something that Numpy can handle?
Thanks!
Numpy, no. (Or rather, not as easily, or accurately.)
Sympy, yes.
Declare a, b and c as symbols.
Create expressions that should equal zero (by moving the left-hand side of each equation to the right-hand side and changing its sign).
Use sympy.sin instead of math.sin or np.sin.
Use sympy.solve to get the solution of the system.
import sympy
from sympy.abc import a, b, c
X = 0.75
Y = 13.42999273315508
Z = 2.4418168605736503
d = 2
e = 2
f = 9
e1 = (b - a) / (d + e) - X
e2 = 2 * sympy.sin(X/2) * ((c / X) + f) - Y
e3 = 2 * sympy.sin(X/2) * ((a / X) + d) - Z
sympy.solve([e1, e2, e3])
# => {a: 1.00000000000000, b: 4.00000000000000, c: 7.00000000000000}
Solving equations with unknown variables can be done in Sympy.
from sympy import symbols, solve, Eq, sin
a, b, c, d, e, f, X, Y, Z = symbols("a b c d e f X Y Z")
eqns = [
Eq(X, (b - a) / (d + e)),
Eq(Y, 2 * sin(X / 2) * ((c / X) + f)),
Eq(Z, 2 * sin(X / 2) * ((a / X) + d)),
]
assignments = {a: 1, b: 4, c: 7, d: 2, e: 2, f: 9}
print(solve([eq.subs(assignments) for eq in eqns], [X, Y, Z]))
Output:
[(3/4, 110*sin(3/8)/3, 20*sin(3/8)/3)]
To solve for a, b, c instead, just replace X, Y, Z with a, b, c in the call to solve and add the numeric values of X, Y, Z to the assignments dict, as sketched below.
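For example, a minimal sketch of that reverse step (the numeric X, Y, Z values are the ones printed in the question):
from sympy import symbols, solve, Eq, sin
a, b, c, d, e, f, X, Y, Z = symbols("a b c d e f X Y Z")
eqns = [
    Eq(X, (b - a) / (d + e)),
    Eq(Y, 2 * sin(X / 2) * ((c / X) + f)),
    Eq(Z, 2 * sin(X / 2) * ((a / X) + d)),
]
# Fix X, Y, Z (values from the question) together with d, e, f, then solve for a, b, c.
assignments = {d: 2, e: 2, f: 9,
               X: 0.75, Y: 13.42999273315508, Z: 2.4418168605736503}
print(solve([eq.subs(assignments) for eq in eqns], [a, b, c]))
# Expected, up to floating-point rounding: a = 1, b = 4, c = 7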

np.int64 behaves differently from int in math operations

I have come across a very strange problem where I do a lot of math and the result is inf or nan when my input is of type <class 'numpy.int64'>, but I get the correct (checked analytically) results when my input is of type <class 'int'>. The only library functions I use are np.math.factorial(), np.sum() and np.array(). I also use a generator object to sum over series and the Boltzmann constant from scipy.constants.
My question is essentially this: Are there any known cases where np.int64 objects behave very differently from int objects?
When I run with np.int64 input, I get the RuntimeWarnings overflow encountered in long_scalars, divide by zero encountered in double_scalars and invalid value encountered in double_scalars. However, the largest number I plug into the factorial function is 36, and I don't get these warnings when I use int input.
Below is code that reproduces the behaviour; I was unable to pin down exactly where it comes from.
import numpy as np
import scipy.constants as const
# Some representative numbers
sigma = np.array([1, 2])
sigma12 = 1.5
mole_weights = np.array([10,15])
T = 100
M1, M2 = mole_weights/np.sum(mole_weights)
m0 = np.sum(mole_weights)
fac = np.math.factorial
def summation(start, stop, func, args=None):
    # sum over the function func for all ints from start to and including stop,
    # pass 'args' as additional arguments
    if args is not None:
        return sum(func(i, args) for i in range(start, stop + 1))
    else:
        return sum(func(i) for i in range(start, stop + 1))

def delta(i, j):
    # kronecker delta
    if i == j:
        return 1
    else:
        return 0

def w(l, r):
    # l, r are ints, return a float
    return 0.25 * (2 - ((1 / (l + 1)) * (1 + (-1) ** l))) * np.math.factorial(r + 1)

def omega(ij, l, r):
    # l, r are ints, ij is an ID, returns a float
    if ij in (1, 2):
        return sigma[ij - 1] ** 2 * np.sqrt(
            (np.pi * const.Boltzmann * T) / mole_weights[ij - 1]) * w(l, r)
    elif ij in (12, 21):
        return 0.5 * sigma12 ** 2 * np.sqrt(
            2 * np.pi * const.Boltzmann * T / (m0 * M1 * M2)) * w(l, r)
    else:
        raise ValueError('(' + str(ij) + ', ' + str(l) + ', ' + str(r) + ') are non-valid arguments for omega.')

def A_prime(p, q, r, l):
    '''
    p, q, r, l are ints. returns a float
    '''
    F = (M1 ** 2 + M2 ** 2) / (2 * M1 * M2)
    G = (M1 - M2) / M2

    def inner(w, args):
        i, k = args
        return ((8 ** i * fac(p + q - 2 * i - w) * (-1) ** (r + i) * fac(r + 1) * fac(
            2 * (p + q + 2 - i - w)) * 2 ** (2 * r) * F ** (i - k) * G ** w) /
            (fac(p - i - w) * fac(q - i - w) * fac(r - i) * fac(p + q + 1 - i - r - w) * fac(2 * r + 2) * fac(
                p + q + 2 - i - w)
             * 4 ** (p + q + 1) * fac(k) * fac(i - k) * fac(w))) * (
            2 ** (2 * w - 1) * M1 ** i * M2 ** (p + q - i - w)) * 2 * (
            M1 * (p + q + 1 - i - r - w) * delta(k, l) - M2 * (r - i) * delta(k, l - 1))

    def sum_w(k, i):
        return summation(0, min(p, q, p + q + 1 - r) - i, inner, args=(i, k))

    def sum_k(i):
        return summation(l - 1, min(l, i), sum_w, args=i)

    return summation(l - 1, min(p, q, r, p + q + 1 - r), sum_k)

def H_i(p, q):
    '''
    p, q are ints. Returns a float
    '''
    def inner(r, l):
        return A_prime(p, q, r, l) * omega(12, l, r)

    def sum_r(l):
        return summation(l, p + q + 2 - l, inner, args=l)

    val = 8 * summation(1, min(p, q) + 1, sum_r)
    return val

p, q = np.int64(8), np.int64(8)
print(H_i(p, q))            # nan
print(H_i(int(p), int(q)))  # 1.3480582058153066e-08
NumPy's int64 is a 64-bit integer, meaning it consists of 64 bits that are each either 0 or 1. Thus the smallest representable value is -2**63 and the biggest one is 2**63 - 1.
Python's int is essentially unlimited in length, so it can represent any value; it is equivalent to a BigInteger in Java. Internally it is stored as a sequence of fixed-size digits that together represent one arbitrarily large number.
What you have here is a classic integer overflow. You mentioned that you "only" plug 36 into the factorial function, but the factorial function grows very fast, and 36! ≈ 3.7e41 > 9.2e18 ≈ 2**63 - 1, so you get a number bigger than anything you can represent in an int64!
Since int64s are also called longs, this is exactly what the warning overflow encountered in long_scalars is trying to tell you!
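A minimal demonstration of the difference (whether a warning is emitted, and its exact text, depends on the NumPy version):
import math
import numpy as np
print(math.factorial(36))              # exact Python int, arbitrary precision
print(math.factorial(36) > 2**63 - 1)  # True: 36! does not fit in an int64
x = np.int64(2**62)
print(int(x) * 4)                      # plain Python ints: exact result, 2**64
print(x * 4)                           # int64 arithmetic overflows and wraps around,
                                       # typically with a RuntimeWarning about overflow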

Sympy - Integration is slow when expression contains many symbols

Say I have the following expression which I would like to integrate over the variable z from 0 to L.
import sympy as sp
mdot, D, R, alpha, beta, xi, mu0, q, cp, Tin, L = sp.symbols("\dot{m}, D, R, alpha, beta, xi, mu_0, q, c_p, T_in, L", real=True, positive=True, constant=True)
z = sp.symbols("z", real=True, positive=True)
n = sp.Symbol("n", real=True)
firstexpr = 8 * mdot**2 * R / (sp.pi**2 * D**5) * (alpha + beta * (sp.pi * D * mu0 / (4 * mdot))**xi * (q * z / (mdot * cp) + Tin)**(n * xi)) * (q * z / (mdot * cp) + Tin)
res1 = sp.integrate(firstexpr, (z, 0, L), conds="none")
This will take forever: I had to stop the computation after 10 minutes on my PC without getting an answer.
The situation improves dramatically if I rewrite my expression so that it contains only the minimum number of constant symbols, integrate it, and finally substitute the original symbols back:
a = 8 * mdot**2 * R / (sp.pi**2 * D**5)
b = beta * (sp.pi * D * mu0 / (4 * mdot))**xi
c = q / (mdot * cp)
_a, _b, _c = sp.symbols("a, b, c", real=True, positive=True, constant=True)
secondexpr = _a * (alpha + _b * (_c * z + Tin)**(n * xi)) * (_c * z + Tin)
res2 = sp.integrate(secondexpr, (z, 0, L), conds="none")
sp.simplify(res2.subs([(_a, a), (_b, b), (_c, c)]))
Why is sympy taking an extremely long time in the first case? Did I miss some assumption when creating my symbols?

Bracket one of two roots in root finding algorithm for roots of a multivariate function

Apologies for the (maybe misleading) title and the probably confusing question itself; I struggle a lot with wording my problem and especially with compressing it into one sentence for the title. I want to find the roots of a function f(w, t, some_other_args) with two variables, w and t, using Python. The real function structure is really long and complicated; you can find it at the end of this post. The important thing is that it contains the following line:
k = 1.5 * m.sqrt((1.0 - w) / (1.0 - 0.25 * w))
This means that w can't exceed 1, because that would lead to calculating the square root of a negative number, which, of course, is impossible. I have algorithms for calculating the approximate values of w and t using other values in my function, but they are very inaccurate.
So I try to calculate the roots with scipy.optimize.fsolve (after trying literally every root-finding algorithm I could find online, I found this one to be the best for my function), using these approximate values as starting points, which would look like this:
solution = optimize.fsolve(f, x0=np.array([t_approx, w_approx]), args=(some_other_args))
For most values, this works perfectly fine. If w is too close to 1, however, there always comes a point when fsolve tries some value bigger than 1 for w, which in turn raises a ValueError (because calculating the root of a negative number is mathematically impossible). This is an example printing out the values that fsolve is using, where w should be somewhere around 0.997:
w_approx: 0.9960090844989311
t_approx: 24.26777844720981
Values: t:24.26777844720981, w:0.9960090844989311
Values: t:24.26777844720981, w:0.9960090844989311
Values: t:24.26777844720981, w:0.9960090844989311
Values: t:24.267778808827888, w:0.9960090844989311
Values: t:24.26777844720981, w:0.996009099340623
Values: t:16.319554685876746, w:1.0096680915775516
solution = optimize.fsolve(f, x0=np.array([t_approx, w_approx]), args=(some_other_args))
File "C:\Users\...\venv\lib\site-packages\scipy\optimize\minpack.py", line 148, in fsolve
res = _root_hybr(func, x0, args, jac=fprime, **options)
File "C:\Users\...\venv\lib\site-packages\scipy\optimize\minpack.py", line 227, in _root_hybr
ml, mu, epsfcn, factor, diag)
File "C:\Users\...\algorithm.py", line 9, in f
k = 1.5 * m.sqrt((1.0 - w) / (1.0 - 0.25 * w))
ValueError: math domain error
So, how can I tell optimize.fsolve that w can't get bigger than 1? Or what are alternative algorithms for doing something like this (I know about brentq and so on, but all of those require giving an interval for both roots, which I don't want to do)?
Code for testing (what's important to note here: even though func is theoretically supposed to calculate R and T given t and w, I have to use it the other way around. It's a bit clunky, but I simply can't manage to rewrite the function so that it accepts T, R to calculate t, w; it's a bit too much for my mediocre mathematical expertise ;)):
import math as m
from scipy import optimize
import numpy as np
def func(t, w, r_1, r_2, r_3):
    k = 1.5 * m.sqrt((1.0 - w) / (1.0 - 0.25 * w))
    k23 = 2 * k / 3
    z1 = 1 / (1 + k23)
    z2 = 1 / (1 - k23)
    z3 = 3 * ((1 / 5 + r_1 - r_2 - 1 / 5 * r_1 * r_2) / (z1 - r_2 * z2)) * m.exp(t * (k - 1))
    z4 = -(z2 - r_2 * z1) / (z1 - r_2 * z2) * m.exp(2 * k * t)
    z5 = -(z1 - r_2 * z2) / (z2 - r_2 * z1)
    z6 = 3 * (1 - r_2 / 5) / (z2 - r_2 * z1)
    beta_t = r_3 / (z2 / z1 * m.exp(2 * k * t) + z5) * (z6 - 3 / (5 * z1) * m.exp(t * (k - 1)))
    alpha_t = beta_t * z5 - r_3 * z6
    beta_r = (z3 - r_1 / 5 / z2 * m.exp(-2 * t) * 3 - 3 / z2) / (z1 / z2 + z4)
    alpha_r = -z1 / z2 * beta_r - 3 / z2 - 3 / 5 * r_1 / z2 * m.exp(-2 * t)
    It_1 = 1 / 4 * w / (1 - 8 / 5 * w) * (alpha_t * z2 * m.exp(-k * t) + beta_t * z1 * m.exp(k * t) + 3 * r_3 * m.exp(-t))
    Ir_1 = (1 / 4 * w / (1 - 8 / 5 * w)) * (z1 * alpha_r + z2 * beta_r + 3 / 5 + 3 * r_1 * m.exp(-2 * t))
    T = It_1 + m.exp(-t) * r_3
    R = Ir_1 + m.exp(-2 * t) * r_1
    return [T, R]

def calc_1(t, w, T, R, r_1, r_2, r_3):
    t_begin = float(t[0])
    T_new, R_new = func(t_begin, w, r_1, r_2, r_3)
    a = abs(-1 + T_new/T)
    b = abs(-1 + R_new/R)
    return np.array([a, b])

def calc_2(x, T, R, r_1, r_2, r_3):
    t = x[0]
    w = x[1]
    T_new, R_new = func(t, w, r_1, r_2, r_3)
    a = abs(T - T_new)
    b = abs(R - R_new)
    return np.array([a, b])

def approximate_w(R):
    k = (1 - R) / (R + 2 / 3)
    w_approx = (1 - ((2 / 3 * k) ** 2)) / (1 - ((1 / 3 * k) ** 2))
    return w_approx

def approximate_t(w, T, R, r_1, r_2, r_3):
    t = optimize.root(calc_1, x0=np.array([10, 0]), args=(w, T, R, r_1, r_2, r_3))
    return t.x[0]

def solve(T, R, r_1, r_2, r_3):
    w_x = approximate_w(R)
    t_x = approximate_t(w_x, T, R, r_1, r_2, r_3)
    sol = optimize.fsolve(calc_2, x0=np.array([t_x, w_x]), args=(T, R, r_1, r_2, r_3))
    return sol
# Values for testing:
T = 0.09986490557943692
R = 0.8918728343037964
r_1 = 0
r_2 = 0
r_3 = 1
print(solve(T, R, r_1, r_2, r_3))
What about applying a logistic transform to the argument that you want to constrain? I mean, inside f, you could do:
import numpy as np
def f(free_w, ...):
    w = 1/(1 + np.exp(-free_w))  # w will always lie between 0 and 1
    ...
    return zeros
Then you just have to apply the same logistic transformation to the solution value of free_w to get w*. See:
solution = optimize.fsolve(f, x0=np.array([t_approx, w_approx]), args=(some_other_args))
free_w = solution[0]
w = 1/(1 + np.exp(-free_w))
Your reported error occurs because fsolve cannot deal with the implicit restrictions in the conversion of w to k. This can be solved radically by inverting that dependence, making func depend on t and k instead.
def w2k(w): return 3 * m.sqrt((1.0 - w) / (4.0 - w))
# k = 1.5 * m.sqrt((1.0 - w) / (1.0 - 0.25 * w))
# (k/3)**2 * (4 - w) = 1 - w
def k2w(k): return 4 - 3/(1 - (k/3)**2)

def func(t, k, r_1, r_2, r_3):
    w = k2w(k)
    print("t=%20.15f, k=%20.15f, w=%20.15f" % (t, k, w))
    ...
Then remove the absolute values from the function values in calc_1 and calc_2. Taking absolute values only turns your solutions into non-differentiable points, which is bad for any root-finding algorithm; sign changes and smooth roots are what Newton-like methods want.
def calc_2(x, T, R, r_1, r_2, r_3):
    t = x[0]
    k = x[1]
    T_new, R_new = func(t, k, r_1, r_2, r_3)
    a = T - T_new
    b = R - R_new
    return np.array([a, b])
It does not make much sense to find the value of t by solving the equation while keeping w (resp. k) fixed; it just doubles the computational effort.
def approximate_k(R):
    k = (1 - R) / (R + 2 / 3)
    return k

def solve(T, R, r_1, r_2, r_3):
    k_x = approximate_k(R)
    t_x = 10
    sol = optimize.fsolve(calc_2, x0=np.array([t_x, k_x]), args=(T, R, r_1, r_2, r_3))
    return sol

t, k = solve(T, R, r_1, r_2, r_3)
print("t=%20.15f, k=%20.15f, w=%20.15f" % (t, k, k2w(k)))
With these modifications the solution
t= 14.860121342410327, k= 0.026653140486605, w= 0.999763184675043
is found within 15 function evaluations.
You should try defining your function explicitly before optimizing it; that way you can check its domain more easily.
Essentially you have a function of T and R. This worked for me:
def func_to_solve(TR_vector, r_1, r_2, r_3):
    T, R = TR_vector  # what you are trying to find
    w_x = approximate_w(R)
    t_x = approximate_t(w_x, T, R, r_1, r_2, r_3)
    return calc_2([t_x, w_x], T, R, r_1, r_2, r_3)

def solve(TR, r_1, r_2, r_3):
    sol = optimize.fsolve(func_to_solve, x0=TR, args=(r_1, r_2, r_3))
    return sol
Also, replace m.exp with np.exp.
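Presumably the point of that last suggestion is that math.exp only accepts scalars, while np.exp also works elementwise on NumPy arrays and returns inf (with a warning) instead of raising on overflow. A minimal illustration:
import math
import numpy as np
arr = np.array([0.5, 1.0])
print(np.exp(arr))     # elementwise: [1.6487... 2.7182...]
print(np.exp(1000.0))  # inf, with an overflow RuntimeWarning, instead of an exception
# math.exp(arr)        # would raise TypeError: only scalars are accepted
# math.exp(1000.0)     # would raise OverflowError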

Making numeric double integration more efficient

So I've made a simple program for numerically approximating a double integral, which allows the bounds of the inner integral to be functions:
import time
import numpy as np

def double_integral(func, limits, res=1000):
    t = time.clock()
    t1 = time.clock()
    t2 = time.clock()
    s = 0
    a, b = limits[0], limits[1]
    outer_values = np.linspace(a, b, res)
    c_is_func = callable(limits[2])
    d_is_func = callable(limits[3])
    for y in outer_values:
        if c_is_func:
            c = limits[2](y)
        else:
            c = limits[2]
        if d_is_func:
            d = limits[3](y)
        else:
            d = limits[3]
        dA = ((b - a) / res) * ((d - c) / res)
        inner_values = np.linspace(c, d, res)
        for x in inner_values:
            t2 = time.clock() - t2
            s += func(x, y) * dA
        t1 = time.clock() - t1
    t = time.clock() - t
    return s, t, t1 / res, t2 / res**2
This is, however, terribly slow. When res=1000, such that the integral is a sum of a million parts, it takes about 5 seconds to run, but the answer is only correct to about the 3rd decimal in my experience. Is there any way to speed this up?
The code I am running to check the integral is:
def f(x, y):
    if (4 - y**2 - x**2) < 0:
        return 0  # this is to avoid taking the root of negative numbers
    return np.sqrt(4 - y**2 - x**2)

def c(y):
    return np.sqrt(2 * y - y**2)

def d(y):
    return np.sqrt(4 - y**2)

#   b d
#   S S f(x,y) dx dy
#   a c
a, b = 0, 2
print(double_integral(f, [a, b, c, d]))
The integral is equal to 16/9.
Edit
So I got a great answer over at Code Review, but I am still baffled by how scipy.integrate.dblquad seems to give me the wrong answer (see comment). Does anyone have an answer for this?
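For reference, a common source of unexpected dblquad results is its calling convention: SciPy calls the integrand as func(inner, outer), the limits a and b belong to the outer variable, and the bound functions take the outer variable as their argument. A minimal sketch for the integral above (expected value 16/9, as stated in the question):
import numpy as np
from scipy import integrate

def f(x, y):
    val = 4 - y**2 - x**2
    return np.sqrt(val) if val > 0 else 0.0

# Outer variable y runs from 0 to 2; inner variable x runs from c(y) to d(y).
# dblquad calls the integrand as func(inner, outer), which matches f(x, y) here.
val, err = integrate.dblquad(f,
                             0, 2,                             # limits of the outer variable y
                             lambda y: np.sqrt(2 * y - y**2),  # inner lower bound c(y)
                             lambda y: np.sqrt(4 - y**2))      # inner upper bound d(y)
print(val)  # should be close to 16/9 ≈ 1.778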
