I have a Python script that differentiates a function containing the gamma function. When I substitute values in, instead of evaluating the digamma of those values and returning a float, SymPy just returns polygamma(0, 1.05) (or whatever the argument happens to be). Below is my code:
import mpmath
import time
import sympy
x = sympy.symbols ('x')
s = sympy.symbols ('s')
from sympy import S, I, pi, gamma, lambdify
Original = ((((sympy.pi**(x/2))*(s**x))/sympy.gamma((x/2)+1))-(((2*s)/(x**0.5))**x))
Prime = Original.diff (x)
Prime = lambdify ((x, s), Prime, modules = 'sympy')
for s_times_10 in range (1, 31):
    s = float (int (s_times_10) / 10)
    for x_times_10 in range (1, 151):
        x = float ((int (x_times_10) / 10))
        print ("x: " + str (x) + ", s: " + str (s))
        print (Prime (x, s))
        if (x > 0.3):
            if (Prime (x + 0.1, s) < Prime (x, s)):
                print ("MAXIMUM N LOCATED: " + str (x))
                time.sleep (1)
                break
        print ("=======")
        time.sleep (0.5)
And below is the output for the first 5 values of x within the for loop:
x: 0.1, s: 0.1
-0.579691734344519 - 0.432005861274674*polygamma(0, 1.05)
=======
x: 0.2, s: 0.1
-0.175935858863424 - 0.371829906705536*polygamma(0, 1.1)
=======
x: 0.3, s: 0.1
0.0107518316667914 - 0.31889065255819*polygamma(0, 1.15)
=======
x: 0.4, s: 0.1
0.098684205215577 - 0.27256963654143*polygamma(0, 1.2)
=======
x: 0.5, s: 0.1
0.133891927091406 - 0.232239660951436*polygamma(0, 1.25)
MAXIMUM N LOCATED: 0.5
As you can see, instead of giving me a simple float answer, it returns an unsolved polygamma function. How do I get rid of this and end up with a float as the final answer?
TLDR: Substituted values into a differentiated gamma function, and instead of returning a float it returned an unsolved polygamma function.
Often, substituting a float into a symbolic SymPy function leads to automatic floating-point evaluation:
In [36]: sin(1)
Out[36]: sin(1)
In [37]: sin(1.0)
Out[37]: 0.841470984807897
In your case, your expression has an exact integer as well as a float, so it does not evaluate in floating point automatically:
In [38]: polygamma(0, 0.1)
Out[38]: polygamma(0, 0.1)
In [39]: polygamma(0.0, 0.1)
Out[39]: -10.4237549404111
Really though if you want floating point evaluation you should ask for it explicitly rather than depending on it happening implicitly:
In [40]: polygamma(0, 0.1).evalf()
Out[40]: -10.4237549404111
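Applied to the loop in the question, a minimal sketch (keeping the variable names from the code above) is to call .evalf() on whatever Prime returns before printing or comparing it:

# Sketch: force numeric evaluation of the substituted derivative.
# Prime(x, s) returns a SymPy expression because modules='sympy',
# so .evalf() turns the remaining polygamma(...) term into a Float.
print(Prime(x, s).evalf())

# the same applies to the comparison used to locate the maximum
if Prime(x + 0.1, s).evalf() < Prime(x, s).evalf():
    print("MAXIMUM N LOCATED: " + str(x))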
I am trying to calculate the logarithmic maximum of n different bets. However, for this example, I have 2 independent simultaneous bets.
Bet 1 has a win probability of 30% and decimal odds of 12.80.
Bet 2 also has a win probability of 30% and decimal odds of 12.80.
To calculate the logarithmic maximum of 2 independent simultaneous bets, I need to work out the probability of all 4 combinations:
Bet 1 Winning/Bet 2 Winning
Bet 1 Winning/Bet 2 Losing
Bet 1 Losing/Bet 2 Winning
Bet 1 Losing/Bet 2 Losing
Assuming x0 is the amount between 0% and 100% of my portfolio on Bet 1 and x1 is the amount between 0% and 100% of my portfolio on Bet 2, the mathematically optimum stakes on both bets can be solved by maximising the following expression:
0.09log(1 + 11.8x0 + 11.8x1) + 0.21log(1 + 11.8x0 - x1) + 0.21log(1 - x0 + 11.8x1) + 0.49log(1 - x0 - x1), which is maximised at x0 = 0.214648, x1 = 0.214648.
(The 11.8 is not a typo, it is simply 12.8 - 1, the profit).
I have tried to implement this calculation in python, with little success. Here is my current code that I need assistance with:
from scipy.optimize import minimize
from math import log
from itertools import product
from sympy import symbols
Bets = [[0.3, 12.8], [0.3, 12.8]]
Odds = [([i[0], 1 - i[0]]) for i in Bets]
OddsList = list(product(Odds[0], Odds[1]))
#Output [(0.3, 0.3), (0.3, 0.7), (0.7, 0.3), (0.7, 0.7)]
Probability = []
for i in range(0, len(OddsList)):
    Probability.append(OddsList[i][0] * OddsList[i][1])
#Output [0.09, 0.21, 0.21, 0.49]
Win = [([i[1] - 1, - 1]) for i in Bets]
WinList = list(product(Win[0], Win[1]))
#Output [(11.8, 11.8), (11.8, -1), (-1, 11.8), (-1, -1)]
xValues = []
for j in range(0, len(Bets)):
    xValues.append(symbols('x' + str(j)))
#Output [x0, x1]
def logarithmic_return(xValues, Probability, WinList):
    Sum = 0
    for i in range(0, len(Probability)):
        Sum += Probability[i] * log(1 + (WinList[i][0] * xValues[0]) + ((WinList[i][1] * xValues[1])))
    return Sum
minimize(logarithmic_return(xValues, Probability, WinList))
#Error TypeError: Cannot convert expression to float
# However, when I do this, it works perfectly:
logarithmic_return([0.214648, 0.214648], Probability, WinList)
#Output 0.3911621722324154
It seems like this is your first time mixing numerical Python with symbolic Python. In short, you cannot use numerical functions (like math.log or scipy.optimize.minimize) on symbolic expressions. You need to convert your symbolic expression to a lambda function first.
Let's try to fix it:
from scipy.optimize import minimize
from itertools import product
from sympy import symbols, lambdify, log
import numpy as np
Bets = [[0.3, 12.8], [0.3, 12.8]]
Odds = [([i[0], 1 - i[0]]) for i in Bets]
OddsList = list(product(Odds[0], Odds[1]))
#Output [(0.3, 0.3), (0.3, 0.7), (0.7, 0.3), (0.7, 0.7)]
Probability = []
for i in range(0, len(OddsList)):
    Probability.append(OddsList[i][0] * OddsList[i][1])
#Output [0.09, 0.21, 0.21, 0.49]
Win = [([i[1] - 1, - 1]) for i in Bets]
WinList = list(product(Win[0], Win[1]))
#Output [(11.8, 11.8), (11.8, -1), (-1, 11.8), (-1, -1)]
xValues = []
for j in range(0, len(Bets)):
    xValues.append(symbols('x' + str(j)))
#Output [x0, x1]
def logarithmic_return(xValues, Probability, WinList):
    Sum = 0
    for i in range(0, len(Probability)):
        Sum += Probability[i] * log(1 + (WinList[i][0] * xValues[0]) + ((WinList[i][1] * xValues[1])))
    return Sum
# this is the symbolic expression
expr = logarithmic_return(xValues, Probability, WinList)
# convert the symbolic expression to a lambda function for
# numerical evaluation
f = lambdify(xValues, expr)
# minimize expect a function of the type f(x), not f(x0, x1).
# hence, we create a wrapper function
func_to_minimize = lambda x: f(x[0], x[1])
initial_guess = [0.5, 0.5]
minimize(func_to_minimize, initial_guess)
# fun: -inf
# hess_inv: array([[1, 0],
# [0, 1]])
# jac: array([nan, nan])
# message: 'NaN result encountered.'
# nfev: 3
# nit: 0
# njev: 1
# status: 3
# success: False
# x: array([0.5, 0.5])
As you can see, the minimization now runs, but it doesn't find a solution; that part is your problem to fix. Here I have only shown how to set up the function you are trying to minimize.
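As a hint towards the fix (a sketch only, reusing f and minimize from above; the specific bounds are an assumption, chosen so that every log argument stays positive): the expression is a quantity to maximise, so negate the lambdified function and constrain the stakes:

# Sketch: maximise the log return by minimising its negative.
# The 0.49 cap on each stake is just one simple way to keep
# 1 - x0 - x1 (and the other log arguments) positive; a linear
# constraint x0 + x1 <= 1 would be the more general fix.
neg_f = lambda v: -f(v[0], v[1])
res = minimize(neg_f, [0.1, 0.1], bounds=[(0, 0.49), (0, 0.49)])
print(res.x)   # should land near [0.214648, 0.214648], matching the hand calculation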
The problem here is that scipy.optimize.minimize wants to be passed a function. You are not passing a function. You are CALLING your function and passing its return value (here a symbolic expression, which minimize then fails to convert to a float) to minimize.
You need:
minimize( logarithmic_return, xValues, args=(Probability, WinList) )
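Note that minimize also needs a purely numeric objective and a numeric starting point, so with this approach one would keep math.log inside logarithmic_return and pass plain floats as the initial guess (negating the sum if the goal is a maximum). A short sketch under those assumptions:

from math import log
from scipy.optimize import minimize

# Sketch: numeric (non-symbolic) objective, negated so that minimize
# finds the maximum of the expected log return
def neg_log_return(xs, Probability, WinList):
    total = 0.0
    for p, w in zip(Probability, WinList):
        total += p * log(1 + w[0] * xs[0] + w[1] * xs[1])
    return -total

# bounds keep every log argument positive during the search
res = minimize(neg_log_return, [0.1, 0.1], args=(Probability, WinList),
               bounds=[(0, 0.49), (0, 0.49)])
print(res.x)   # expected near [0.214648, 0.214648]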
The code below finds x* using gradient descent. The problem is that the final result of req8 is -1.00053169969469, and when I round it, it comes out as -1.00000000000000.
How can I round it to get -1.0 or -1.00 instead, without using any other module?
from sympy import *
import numpy as np
global x, y, z, t
x, y, z, t = symbols("x, y, z, t")
def req8(f, eta, xi, tol):
    dx = diff(f(x), x)
    arrs = [xi]
    for i in range(100):
        x_star = arrs[-1] - eta * round(dx.subs(x, arrs[-1]), 7)
        if abs(dx.subs(x, x_star)) < tol:
            break
        arrs.append(x_star)
    print(arrs[-1])
    print(round(arrs[-1], 2))

def f_22(x):
    return x**2 + 2*x - 1
req8(f_22, 0.1, -5, 1e-3)
Use the str.format method to print your rounded result with the desired number of digits (2 in the example below):
print("{:.2f}".format(round(arrs[-1], 2)))
EDIT: as pointed out by @SultanOrazbayev, rounding is no longer necessary here and you may print your result using the following expression:
print("{:.2f}".format(arrs[-1]))
After having tried many things, I thought it would be good to ask on SO. My problem is fairly simple: how can I solve the following equation using Sympy?
sum_{j=0}^{J-1} j*q_j / (lambda_0 - j) = 1
I want to solve this for lambda_0, where q is an array of size J containing elements between 0 and 1 that sum to 1 (a discrete probability distribution). I tried the following:
from sympy.solvers import solve
from sympy import symbols, summation
p = [0.2, 0.3, 0.3, 0.1, 0.1]
l = symbols('l')
j = symbols('j')
eq= summation(j*q[j]/(l-j), (j, 0, 4))
s= solve(eq, l)
But this gives me an error for q[j], since j is a Symbol object here and not an integer. If I don't make j a symbol, I cannot build the eq expression. Does anyone know how to do this?
Edit: p = 1-q in the above, hence q[j] should have been replaced by (1-p[j]).
The list p needs to be converted into a symbolic Array before it can be indexed with the symbolic value j.
from sympy.solvers import solve
from sympy import symbols, summation, Array
p = Array([0.2, 0.3, 0.3, 0.1, 0.1])
l, j = symbols('l j')
eq = summation(j * (1 - p[j]) / (l - j), (j, 0, 4))
s = solve(eq - 1, l) # [1.13175762143963 + 9.29204634892077e-30*I, 2.23358705810004 - 1.36185313905566e-29*I, 3.4387382449005 + 3.71056356734273e-30*I, 11.5959170755598 + 6.15921474293073e-31*I]
(assuming your p stands for 1 - q)
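The tiny imaginary parts in those roots (of order 1e-29 and smaller) are just numerical noise from solving with floating-point coefficients; a small sketch of one way to strip them, assuming the solution list s from above:

from sympy import re, im

# Sketch: keep only the real part of roots whose imaginary part is negligible
real_roots = [re(r) for r in s if abs(im(r)) < 1e-12]
print(real_roots)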
How can I set an equation equal to zero and then solve it (the purpose is to eliminate the denominator)?
y=(x**2-2)/3*x
In Matlab this works:
solution= solve(y==0,x)
but not in python.
from sympy import *
x, y = symbols('x y')
y=(x**2-2)/3*x
# set the expression, y, equal to 0 and solve
result = solve(Eq(y, 0))
print(result)
Another solution:
from sympy import *
x, y = symbols('x y')
equation = Eq(y, (x**2-2)/3*x)
# Use sympy.subs() method
result = solve(equation.subs(y, 0))
print(result)
Edit (even simpler):
from sympy import *
x, y = symbols('x y')
y=(x**2-2)/3*x
# solve the expression y (by default set equal to 0)
result = solve(y)
print(result)
If you only want to eliminate the denominator, you can split the expression into numerator and denominator. If the equation already appears as a fraction and you want the numerator, then
>>> y=(x**2-2)/(3*x); y # note parentheses around denom, is that what you meant?
(x**2 - 2)/(3*x)
>>> numer(_)
x**2 - 2
But if the equation appears as a sum then you can put it over a denominator and perhaps factor to identify numerator factors that must be zero in order to solve the equation:
>>> y + x/(x**2+2)
x/(x**2 + 2) + (x**2 - 2)/(3*x)
>>> n, d = _.as_numer_denom(); (n, d)
(3*x**2 + (x**2 - 2)*(x**2 + 2), 3*x*(x**2 + 2))
>>> factor(n)
(x - 1)*(x + 1)*(x**2 + 4)
>>> solve(_)
[-1, 1, -2*I, 2*I]
You don't have to factor the numerator before attempting to solve, however. But I sometimes find it useful when working with a specific equation.
If you have an example of an equation that is solved quickly elsewhere but not in SymPy, please post it.
I want to double-integrate a function, but I get different results when using dblquad from scipy.integrate and MATLAB's dblquad. A Python implementation of the function I want to double-integrate looks like this:
###Python implementation##
import numpy as np
from scipy.integrate import dblquad
def InitialCondition(x_b, y_b, m10, m20, N0):
    IC = np.zeros((len(x_b)-1, len(y_b)-1))
    for i in xrange(len(x_b) - 1):
        for j in xrange(len(y_b) - 1):
            IC[i,j], abserr = dblquad(ExponenIC, x_b[i], x_b[i + 1], lambda x: y_b[j], lambda x: y_b[j+1], args=(m10, m20, N0), epsabs=1.49e-15, epsrel=1.49e-15)
    return IC

def ExponenIC(x, y, m10, m20, N0):
    retVal = (16 * N0) / (m10 * m20) * (x / m10) * (y / m20) * np.exp(-2 * (x / m10) - 2 * (y / m20))
    return retVal

if __name__ == '__main__':
    x_min, x_max = 0.0004, 20.0676
    x_b = np.exp(np.linspace(np.log(x_min), np.log(x_max), 4))
    y_b = np.copy(x_b)
    m10, m20, N0 = 0.04, 0.04, 1
    print InitialCondition(x_b, y_b, m10, m20, N0)
But if I repeat the same in matlab, with equivalent implementation and same input as shown below:
%%%Matlab equivalent%%%
function IC = test(x_b, y_b, m10, m20, N0)
for i = 1:length(x_b)-1
    for j = 1:length(y_b)-1
        IC(i, j) = dblquad(@ExponenIC, x_b(i), x_b(i+1), y_b(j), y_b(j+1), 1e-6, @quad, m10, m20, N0);
    end
end
return
function retVal = ExponenIC(x, y, m10, m20, N0)
retVal = (16 * N0) / (m10*m20) * (x / m10) .* (y / m20) .* exp(-2*(x/m10) - 2 * (y/m20));
return
% for calling
x_min = 0.0004;
x_max = 20.0676;
x_b = exp(linspace(log(x_min), log(x_max), 4));
y_b = x_b;
m10 = 0.04;
m20 = 0.04;
N0 = 1;
I = test(x_b, y_b, m10, m20, N0)
Scipy dblquad returns:
[[ 2.84900512e-02 1.40266599e-01 7.34019842e-12]
[ 1.40266599e-01 6.90582083e-01 3.61383932e-11]
[ 7.28723691e-12 3.58776449e-11 1.89113430e-21]]
and Matlab dblquad returns:
IC =
28.4901e-003 140.2666e-003 144.9328e-012
140.2666e-003 690.5820e-003 690.9716e-012
144.9328e-012 690.9716e-012 737.2926e-021
I have tried changing the tolerances and the order of the inputs, but the two solutions always remain different. I cannot tell which one is accurate, and I would like to get the correct result in Python. Can someone tell whether this is a bug in one of the dblquad solvers or somewhere in my code?
From a glance at the results, the repetition of 690 in the Matlab output (in places where Python has different results) casts doubt on Matlab's performance.
One of the problems with using the (deprecated) Matlab function dblquad is that the tolerance you specify is absolute (to my understanding). This is why, when you specify 1e-6, the integrals of order 1e-11 come out wrong. When you replace it with 1e-12, the computation takes a lot longer (because the larger integrals must now be computed to great precision), yet the smallest integral, of size 1e-21, is still wrong.
Hence, you should use a routine that supports relative error tolerance, such as integral2.
Replacing the Matlab line with dblquad by
IC(i, j) = integral2(@(x,y) ExponenIC(x, y, m10, m20, N0), x_b(i), x_b(i+1), y_b(j), y_b(j+1), 'RelTol', 1e-12);
I get
0.0284900512006556 0.14026659933722 7.10653215130477e-12
0.14026659933722 0.690582082532588 3.51109000906259e-11
7.10653215130476e-12 3.5110900090626e-11 1.78512164747727e-21
which roughly agrees with Python output. Still, there remains a substantial difference. To settle the matter definitely, I calculated the integrals analytically. The exact result is
0.0284900512006717 0.140266599337199 7.28723691243472e-12
0.140266599337199 0.690582082532677 3.58776449039036e-11
7.28723691243472e-12 3.58776449039036e-11 1.86394265998016e-21
Neither package achieved the desired accuracy, but Python/scipy was much closer.
For completeness, the loop that outputs the analytic solution:
function IC = test(x_b, y_b, m10, m20, N0)
F = @(x,a) -0.25*exp(-2*x/a)*(2*x+a);
for i = 1:length(x_b)-1
    for j = 1:length(y_b)-1
        IC(i,j) = (16 * N0) / (m10*m20) * (F(x_b(i+1),m10)-F(x_b(i),m10)) * (F(y_b(j+1),m20)-F(y_b(j),m20));
    end
end
end
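For reference, a sketch of the same analytic cross-check in Python/NumPy (not part of the original code), using the antiderivative F(x, a) = -0.25*exp(-2*x/a)*(2*x + a) from the MATLAB snippet above:

import numpy as np

# Sketch: the double integral separates, so each cell is a product of
# two 1-D integrals evaluated from the antiderivative F
def F(x, a):
    return -0.25 * np.exp(-2 * x / a) * (2 * x + a)

x_min, x_max = 0.0004, 20.0676
x_b = np.exp(np.linspace(np.log(x_min), np.log(x_max), 4))
y_b = x_b.copy()
m10, m20, N0 = 0.04, 0.04, 1

Fx = F(x_b[1:], m10) - F(x_b[:-1], m10)   # integrals over the x bins
Fy = F(y_b[1:], m20) - F(y_b[:-1], m20)   # integrals over the y bins
IC = (16 * N0) / (m10 * m20) * np.outer(Fx, Fy)
print(IC)   # should reproduce the exact values quoted above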