scipy multinomial pmf returns nan - python

I'm trying to use the multinomial.pmf function from scipy.stats (Python).
When all probabilities in the input are greater than zero, it works fine. The problem arises when I want to use the function with one of the probabilities set to zero.
The following example shows what I mean:
In [18]: multinomial.pmf([3, 3, 0], 6, [1/3.0, 1/3.0, 1/3.0])
Out[18]: 0.027434842249657095
In [19]: multinomial.pmf([3, 3, 0], 6, [2/3.0, 1/3.0, 0])
Out[19]: nan
As can be seen, in the first case, where all probabilities are > 0, there is no problem using the function. However, when I change one of the probabilities to zero, the function returns nan, even though it should return 0.21948.
Is there a way (in Python) to calculate the pmf when one of the probabilities is zero? Either another function that can handle it, or a workaround for this one.
Additional info
The value that the function in the example should have returned, I calculated using the mnpdf function in MATLAB. However, since the rest of my code is in Python, I would prefer a way to calculate it in Python.

Good spot! This is a bug in scipy. The source code can be found here.
Line 3031 to 3051:
def pmf(self, x, n, p):
    return np.exp(self.logpmf(x, n, p))
Line 2997 to 3017:
def logpmf(self, x, n, p):
    n, p, npcond = self._process_parameters(n, p)
Line 2939 to 2958:
def _process_parameters(self, n, p):
    p = np.array(p, dtype=np.float64, copy=True)
    p[..., -1] = 1. - p[..., :-1].sum(axis=-1)

    # true for bad p
    pcond = np.any(p <= 0, axis=-1)  # <- Here is why!!!
    pcond |= np.any(p > 1, axis=-1)

    n = np.array(n, dtype=np.int, copy=True)
    # true for bad n
    ncond = n <= 0

    return n, p, ncond | pcond
The line pcond = np.any(p <= 0, axis=-1) results in pcond being true if any value of p is <= 0.
Then in logpmf line 3029:
return self._checkresult(result, npcond_, np.NAN)
results in logpmf and pmf returning nan!
Note that the actual result is calculated properly (line 3020, 2994-2995):
result = self._logpmf(x, n, p)
def _logpmf(self, x, n, p):
    return gammaln(n+1) + np.sum(xlogy(x, p) - gammaln(x+1), axis=-1)
With your values:
import numpy as np
from scipy.special import xlogy, gammaln
x = np.array([3, 3, 0])
n = 6
p = np.array([2/3.0, 1/3.0, 0])
result = np.exp(gammaln(n+1) + np.sum(xlogy(x, p) - gammaln(x+1), axis=-1))
print(result)
# output: 0.219478737997
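If you need this in more than one place, a small wrapper avoids repeating the formula. This is just a sketch re-using the xlogy identity above (the name multinomial_pmf is my own, not part of scipy): xlogy(0, 0) evaluates to 0 instead of producing nan. Note also that newer scipy releases may have fixed the bug, so it is worth re-testing after an upgrade.

import numpy as np
from scipy.special import gammaln, xlogy

def multinomial_pmf(x, n, p):
    # xlogy(0, 0) == 0, so a category with x_i == 0 and p_i == 0
    # contributes nothing instead of poisoning the sum with nan
    x = np.asarray(x)
    p = np.asarray(p, dtype=np.float64)
    logpmf = gammaln(n + 1) + np.sum(xlogy(x, p) - gammaln(x + 1), axis=-1)
    return np.exp(logpmf)

print(multinomial_pmf([3, 3, 0], 6, [2/3.0, 1/3.0, 0]))  # 0.219478737997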


Issue with Python scipy optimize minimize fmin_slsqp solver

I am starting out with the optimization functions from scipy.
I tried to create my code by adapting the solution from Find optimal vector that minimizes function.
I have an array that contains series in columns. I need to multiply each of them by a weight so that the sum of the last row of these columns, multiplied by the weights, gives a given number (constraint).
The sum of the series multiplied by the weights gives a new series from which I extract the max drawdown, and I want to minimize this mdd.
I wrote my code as best I can (2 months of Python and 3 hours of scipy) and can't resolve the error message raised by the function used to solve the problem.
Here is my code and any help would be much appreciated:
import numpy as np
from scipy.optimize import fmin_slsqp

# based on: https://stackoverflow.com/questions/41145643/find-optimal-vector-that-minimizes-function
# the number of columns (and so of weights) can vary; it should be generic, regardless of the number of columns

def mdd(serie):  # finding the max drawdown of a series (put aside not to create add'l problems)
    min = np.nanargmax(np.fmax.accumulate(serie) - serie)
    max = np.nanargmax((serie)[:min])
    return serie[np.nanargmax((serie)[:min])] - serie[min]  # max drawdown

# defining the input data
# mat is an array of 5 columns containing series of independent data
mat = np.array([[1, 0, 0, 1, 1],
                [2, 0, 5, 3, 4],
                [3, 2, 4, 3, 7],
                [4, 1, 3, 3.1, -6],
                [5, 0, 2, 5, -7],
                [6, -1, 4, 1, -8]]).astype('float32')
w = np.ndarray(shape=(5)).astype('float32')  # 1D vector for the weights used in the column multiplication
w0 = np.array([1/5, 1/5, 1/5, 1/5, 1/5]).astype('float32')  # initial weights (all equal as a starting point)
fixed_value = 4.32  # as a result of constraint nb 1

# testing the operations that are going to be used in the minimization
series = np.sum(mat * w0, axis=1)

# objective:
# minimize the mdd of the series by modifying the weights (w)
def test(w, mat):
    series = np.sum(mat * w, axis=1)
    return mdd(series)

# constraints:
def cons1(last, w, fixed_value):  # fixed_value = 4.32
    # the sum of the weights multiplied by the last value of each column must equal this fixed_value
    return np.sum(mat[-1, :] * w) - fixed_value

def cons2(w):  # the sum of the weights must be equal to 1
    return np.sum(w) - 1

# solution:
# looking for the optimal set of weights (w) that minimizes the mdd, with the two constraints and bounds respected
# all w values must be between 0 and 1
result = fmin_slsqp(test, w0, f_eqcons=[cons1, cons2], bounds=[(0.0, 1.0)]*len(w), args=(mat, fixed_value, w0), full_output=True)
weights, fW, its, imode, smode = result
print(weights)
You weren't that far off the mark. The biggest problem lies in the mdd function: in case there is no drawdown, your function produces an empty list as an intermediate result, which argmax can then no longer handle.
def mdd(serie):  # finding the max drawdown of a series
    i = np.argmax(np.maximum.accumulate(serie) - serie)  # end of the period
    start = serie[:i]
    # check if there is a drawdown at all
    if not start.any():
        return 0
    j = np.argmax(start)  # start of period
    return serie[j] - serie[i]  # max drawdown
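A quick check of the repaired function on two arbitrary sample series:

import numpy as np

print(mdd(np.array([1.0, 3.0, 2.0, 5.0, 4.0])))  # drop from 3 to 2 -> 1.0
print(mdd(np.array([1.0, 2.0, 3.0])))            # monotonically rising -> 0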
In addition, you must make sure that the parameter list is the same for all functions involved (cost function and constraints).
# objective:
# minimize the mdd of the series by modifying the weights (w)
def test(w, mat, fixed_value):
    series = mat @ w
    return mdd(series)

# constraints:
def cons1(w, mat, fixed_value):  # fixed_value = 4.32
    # the sum of the weights multiplied by the last value of each column must equal this fixed_value
    return mat[-1, :] @ w - fixed_value

def cons2(w, mat, fixed_value):  # the sum of the weights must be equal to 1
    return np.sum(w) - 1
# solution:
# looking for the optimal set of weights (w) that minimizes the mdd, with the two constraints and bounds respected
# all w values must be between 0 and 1
result = fmin_slsqp(test, w0, eqcons=[cons1, cons2], bounds=[(0.0, 1.0)]*len(w), args=(mat,fixed_value), full_output=True)
One more remark: you can make the matrix-vector multiplications much leaner with the @ operator.
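For reference, this is how the corrected pieces fit together in the final call; a sketch assuming the data from the question and the repaired functions above:

result = fmin_slsqp(test, w0,
                    eqcons=[cons1, cons2],  # a list of functions goes in eqcons, not f_eqcons
                    bounds=[(0.0, 1.0)] * len(w0),
                    args=(mat, fixed_value),
                    full_output=True)
weights, fW, its, imode, smode = result
print(imode, smode)  # imode == 0 means SLSQP terminated successfully
print(weights)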

Find parameters from the Vasicek model

I am given price data for a bond (reproduced in the years and pric arrays below) and need to fit the Vasicek model to this data.
My attempt is the following:
import numpy as np
import scipy.optimize
import sympy
import sympy as sym  # the snippet uses both names

years = np.array([1, 2, 3, 4, 7, 10])
pric = np.array([0, .93, .85, .78, .65, .55, .42])

X = sympy.symbols("a b sigma")
a, b, s = X

rt1_rt = np.diff(pric)
ab_rt = np.array([a*(b - r) for r in pric[1:]])
term = rt1_rt - ab_rt

def normpdf(x, mean, sd):
    var = sd**2
    denom = (2*sym.pi*var)**.5
    num = sym.E**(-(x - mean)**2/(2*var))
    return num/denom

pdfs = np.array([sym.log(normpdf(x, 0, s)) for x in term])
func = 0
for el in pdfs:
    func += el
func = func.factor()

lmd = sym.lambdify(X, func)

def target_fun(params):
    return lmd(*params)

result = scipy.optimize.least_squares(target_fun, [10, 10, 10])
I don't think it outputs the correct solution.
Your code is almost correct.
You want to maximize your function, therefore you need to place a minus sign in front of lmd in your function:
def target_fun(params):
    return -lmd(*params)
Additionally, the initial values are usually set to less than 1. Picking 10 is not the best choice, as the algorithm might converge to a saddle point.
Consider [0.01, 0.01, 0.01].
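Whichever starting point you pick, it is worth checking that the objective is finite there before handing it to the optimizer: with a very small sigma the likelihood underflows and its log becomes -inf. A small diagnostic sketch, assuming lmd from the question:

import numpy as np

for guess in ([10, 10, 10], [1, 0.5, 1], [0.01, 0.01, 0.01]):
    val = -lmd(*guess)  # the negated objective passed to the optimizer
    print(guess, val, "finite" if np.isfinite(val) else "NOT finite")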

Find the point of intersection of two linear equations using Numpy

The objective is to find the point of intersection of two linear equations. These two linear equations are derived using NumPy's polyfit function.
Given two time series, (xLeft, yLeft) and (xRight, yRight), the linear least-squares fit to each of them was calculated using polyfit as shown below:
xLeft = [
6168, 6169, 6170, 6171, 6172, 6173, 6174, 6175, 6176, 6177,
6178, 6179, 6180, 6181, 6182, 6183, 6184, 6185, 6186, 6187
]
yLeft = [
0.98288751, 1.3639959, 1.7550986, 2.1539073, 2.5580614,
2.9651523, 3.3727503, 3.7784295, 4.1797948, 4.5745049,
4.9602985, 5.3350167, 5.6966233, 6.0432272, 6.3730989,
6.6846867, 6.9766307, 7.2477727, 7.4971657, 7.7240791
]
xRight = [
6210, 6211, 6212, 6213, 6214, 6215, 6216, 6217, 6218, 6219,
6220, 6221, 6222, 6223, 6224, 6225, 6226, 6227, 6228, 6229,
6230, 6231, 6232, 6233, 6234, 6235, 6236, 6237, 6238, 6239,
6240, 6241, 6242, 6243, 6244, 6245, 6246, 6247, 6248, 6249,
6250, 6251, 6252, 6253, 6254, 6255, 6256, 6257, 6258, 6259,
6260, 6261, 6262, 6263, 6264, 6265, 6266, 6267, 6268, 6269,
6270, 6271, 6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279,
6280, 6281, 6282, 6283, 6284, 6285, 6286, 6287, 6288]
yRight=[
7.8625913, 7.7713094, 7.6833806, 7.5997391, 7.5211883,
7.4483986, 7.3819046, 7.3221073, 7.2692747, 7.223547,
7.1849418, 7.1533613, 7.1286001, 7.1103559, 7.0982385,
7.0917811, 7.0904517, 7.0936642, 7.100791, 7.1111741,
7.124136, 7.1389918, 7.1550579, 7.1716633, 7.1881566,
7.2039142, 7.218349, 7.2309117, 7.2410989, 7.248455,
7.2525721, 7.2530937, 7.249711, 7.2421637, 7.2302341,
7.213747, 7.1925621, 7.1665707, 7.1356878, 7.0998487,
7.0590014, 7.0131001, 6.9621005, 6.9059525, 6.8445964,
6.7779589, 6.7059474, 6.6284504, 6.5453324, 6.4564347,
6.3615761, 6.2605534, 6.1531439, 6.0391097, 5.9182019,
5.7901659, 5.6547484, 5.5117044, 5.360805, 5.2018456,
5.034656, 4.8591075, 4.6751242, 4.4826899, 4.281858,
4.0727611, 3.8556159, 3.6307325, 3.3985188, 3.1594861,
2.9142516, 2.6635408, 2.4081881, 2.1491354, 1.8874279,
1.6242117, 1.3607255, 1.0982931, 0.83831298
]
left_line = np.polyfit(xLeft, yLeft, 1)
right_line = np.polyfit(xRight, yRight, 1)
In this case, polyfit outputs the coefficients m and b for y = mx + b, respectively.
The intersection of the two linear equations then can be calculated as follows:
x0 = -(left_line[1] - right_line[1]) / (left_line[0] - right_line[0])
y0 = x0 * left_line[0] + left_line[1]
However, I wonder whether there is a NumPy built-in approach to calculate the last two steps?
Not exactly a built-in approach, but you can simplify the problem. Say I have lines given by y = m1 * x + b1 and y = m2 * x + b2. You can trivially find an equation for their difference, which is also a line:
y = (m1 - m2) * x + (b1 - b2)
Notice that this line will have a root at the intersection of the two original lines, if they intersect. You can use the numpy.polynomial.Polynomial class to perform these operations:
>>> (np.polynomial.Polynomial(left_line[::-1]) - np.polynomial.Polynomial(right_line[::-1])).roots()
array([6192.0710885])
Notice that I had to swap the order of the coefficients, since Polynomial expects smallest to largest, while np.polyfit returns the opposite. In fact, np.polyfit is no longer recommended. Instead, you can get Polynomial objects directly using the np.polynomial.Polynomial.fit class method. Your code would then look like:
left_line = np.polynomial.Polynomial.fit(xLeft, yLeft, 1, domain=[-1, 1])
right_line = np.polynomial.Polynomial.fit(xRight, yRight, 1, domain=[-1, 1])
x0 = (left_line - right_line).roots()
y0 = left_line(x0)
The fit's domain is mapped onto the window [-1, 1]. If you do not specify a domain, the peak-to-peak range of the x-values is used instead; you do not want that here, since it would rescale the input values. Explicitly setting domain=[-1, 1] makes the domain-to-window mapping the identity, so the coefficients stay in terms of the raw x. An alternative would be to keep the default domain and set e.g. window=[xLeft.min(), xLeft.max()]. The problem with that approach is that it would create different domains for the two polynomials, preventing the operation left_line - right_line.
See https://numpy.org/doc/stable/reference/routines.polynomials.classes.html for more information.
You can model it as a linear system and use simple linear algebra:
def get_intersection(m1, b1, m2, b2):
    # solve the linear system A X = b, where X = [x y]'
    A = np.array([[-m1, 1], [-m2, 1]])
    b = np.array([[b1], [b2]])
    X = np.linalg.pinv(A) @ b
    x, y = np.round(np.squeeze(X), 4)
    return x, y  # returns the point of intersection (x, y) with 4-decimal precision
m1, b1, m2, b2 = left_line[0], left_line[1], right_line[0], right_line[1]
print(get_intersection(m1, b1, m2, b2))
As an example, for the lines y - x = 1 and y + x = 1, we expect the intersection at (0, 1):
m1, b1, m2, b2 = 1, 1, -1, 1
print(get_intersection(m1, b1, m2, b2))
Output: (0.0, 1.0) as expected.
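Since A is square, and invertible whenever the two lines are not parallel, np.linalg.solve is a natural alternative to the pseudoinverse; a minimal sketch of that variant:

import numpy as np

def get_intersection_solve(m1, b1, m2, b2):
    # rewrite y = m*x + b as -m*x + y = b for both lines and solve for (x, y)
    A = np.array([[-m1, 1.0], [-m2, 1.0]])
    b = np.array([b1, b2])
    x, y = np.linalg.solve(A, b)  # raises LinAlgError for parallel lines
    return float(x), float(y)

print(get_intersection_solve(1, 1, -1, 1))  # (0.0, 1.0), matching the example above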

numpy - do operation along specified axis

So I want to implement a matrix standardisation method.
To do that, I've been told to
subtract the mean and divide by the standard deviation for each dimension
And to verify:
after this processing, each dimension has zero mean and unit variance.
That sounds simple enough ...
import numpy as np

def standardize(X: np.ndarray, inplace=True, verbose=False, check=False):
    ret = X
    if not inplace:
        ret = X.copy()
    ndim = np.ndim(X)
    for d in range(ndim):
        m = np.mean(ret, axis=d)
        s = np.std(ret, axis=d)
        if verbose:
            print(f"m{d} =", m)
            print(f"s{d} =", s)
        # TODO: handle zero s
        # TODO: subtract m along the correct axis
        # TODO: divide by s along the correct axis
    if check:
        means = [np.mean(X, axis=d) for d in range(ndim)]
        stds = [np.std(X, axis=d) for d in range(ndim)]
        if verbose:
            print("means=\n", means)
            print("stds=\n", stds)
        assert all(all(m < 1e-15 for m in mm) for mm in means)
        assert all(all(s == 1.0 for s in ss) for ss in stds)
    return ret
e.g. for ndim == 2, we could get something like
A =
[[ 0.40923704  0.91397416  0.62257397]
 [ 0.15614258  0.56720836  0.80624135]]
m0 = [ 0.28268981  0.74059126  0.71440766]  # can broadcast with ret -= m0
s0 = [ 0.12654723  0.1733829   0.09183369]  # can broadcast with ret /= s0
m1 = [ 0.33333333 -0.33333333]  # ???
s1 = [ 0.94280904  0.94280904]  # ???
How do I do that?
Judging by Broadcast an operation along specific axis in python, I thought I might be looking for a way to create
m[None, None, None, .., None, : , None, None, .., None]
Where there is exactly one : at index d.
But even if I knew how to do that, I'm not sure it'd work.
You can swap your axes such that the first axis is the one you want to normalise. This also works in place, since swapaxes just returns a view on your data.
Using the numpy command swapaxes:
for d in range(ndim):
    m = np.mean(ret, axis=d)
    s = np.std(ret, axis=d)
    ret = np.swapaxes(ret, 0, d)
    # perform normalisation along this axis
    ret -= m
    ret /= s
    ret = np.swapaxes(ret, 0, d)
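Wrapped into the standardize function from the question, a minimal runnable sketch looks like this (one caveat: normalising the second axis disturbs the zero-mean/unit-variance property of the first, so the check from the question cannot hold for both axes at once):

import numpy as np

def standardize(X: np.ndarray, inplace=True):
    ret = X if inplace else X.copy()
    for d in range(ret.ndim):
        m = np.mean(ret, axis=d)
        s = np.std(ret, axis=d)
        ret = np.swapaxes(ret, 0, d)  # a view, so the in-place ops below still hit the buffer
        ret -= m
        ret /= s
        ret = np.swapaxes(ret, 0, d)
    return ret

A = np.array([[0.40923704, 0.91397416, 0.62257397],
              [0.15614258, 0.56720836, 0.80624135]])
print(standardize(A, inplace=False))

An alternative to the axis swap is ret -= np.expand_dims(m, axis=d), which inserts the length-1 axis at position d; that is exactly the m[None, ..., :, ..., None] shape the question was reaching for.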

how to write symbol for sum over a variable's subscript in sympy

I want to write a sympy symbol for a summation, but the index summed over also appears as the subscript of a variable name in the summand. For example,
import numpy as np
import sympy
sympy.init_printing()
r = sympy.Symbol('r')
a = sympy.Matrix(sympy.symbols('a:4'))
rpowers = sympy.Matrix([r**i for i in range(len(a))])
long_expr = a.dot(rpowers)
n = sympy.Symbol('n')
a_n = sympy.Symbol('a_n')
short_expr = sympy.Sum(a_n * r**n, (n, 0, 3))
long_expr and short_expr denote the same thing mathematically. But with long_expr, I can substitute in the values for the a's and then lambdify that expression into a numpy function:
coeffed_long_expr = long_expr.subs(zip(a, [-1, 3, 23, 8]))
func_long_expr = sympy.lambdify([r], coeffed_long_expr, 'numpy')
How can I do the same with short_expr? Or is short_expr only useful for displaying the expression with a summation sign in this case? I would like to be able to display it using the summation sign, especially for large n.
You can accomplish this by using sympy.Function:
import sympy
a_seq = [-1, 3, 23, 8]
n, r = sympy.symbols('n, r')
a_n = sympy.Function('a')(n)
terms = 4
short_expr = sympy.Sum(a_n * r**n, (n, 0, terms - 1))
coeffed_short_expr = short_expr.doit().subs(
    (a_n.subs(n, i), a_seq[i]) for i in range(terms))  # 8*r**3 + 23*r**2 + 3*r - 1
func_short_expr = sympy.lambdify(r, coeffed_short_expr, 'numpy')
If you wish for a cleaner, more efficient implementation, I suspect you may be able to define a subclass of sympy.Symbol that implements subs() properly for summations.
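For completeness, a quick check that the lambdified function vectorises over numpy arrays as expected (the sample points below are arbitrary):

import numpy as np

r_vals = np.array([0.0, 1.0, 2.0])
print(func_short_expr(r_vals))  # [ -1.  33. 161.] for 8*r**3 + 23*r**2 + 3*r - 1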
