Using numpy digitize output in scipy minimize problem - python

I am trying to minimize the quadratic weighted kappa using scipy's fmin_powell function.
The two functions digitize_train and digitize_train2 give exactly the same results.
However, when I use these functions inside the minimization, the second method fails.
I have been trying to debug the problem for hours; to my surprise, despite the two functions being identical, the numpy digitize version fails during the fmin_powell minimization.
How can I fix the error?
Question
How to use numpy.digitize in scipy fmin_powell?
SETUP
# imports
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.optimize import fmin_powell
from sklearn import metrics
# data
train_labels = [1,1,8,7,6,5,3,2,4,4]
train_preds = [0.1,1.2,8.9, 7.6, 5.5, 5.5, 2.99, 2.4, 3.5, 4.0]
guess_lst = (1.5,2.9,3.1,4.5,5.5,6.1,7.1)
# functions
# map real numbers in (-inf, +inf) to the integers 1 to 8
def digitize_train(train_preds, guess_lst):
    (x1, x2, x3, x4, x5, x6, x7) = list(guess_lst)
    res = []
    for y in list(train_preds):
        if y < x1:
            res.append(1)
        elif y < x2:
            res.append(2)
        elif y < x3:
            res.append(3)
        elif y < x4:
            res.append(4)
        elif y < x5:
            res.append(5)
        elif y < x6:
            res.append(6)
        elif y < x7:
            res.append(7)
        else:
            res.append(8)
    return res

def digitize_train2(train_preds, guess_lst):
    return np.digitize(train_preds, guess_lst) + 1
# compare two functions
df = pd.DataFrame({'train_labels': train_labels,
                   'train_preds': train_preds,
                   'method_1': digitize_train(train_preds, guess_lst),
                   'method_2': digitize_train2(train_preds, guess_lst)
                   })
df
NOTE: the two functions give exactly the same results.
Method 1: without numpy digitize runs fine
# using fmin_powel for method 1
def get_offsets_minimizing_train_preds_kappa(guess_lst):
    res = digitize_train(train_preds, guess_lst)
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa, guess_lst, disp=True)
print(offsets)
Method 2: using numpy digitize fails
# using fmin_powell for method 2
def get_offsets_minimizing_train_preds_kappa2(guess_lst):
    res = digitize_train2(train_preds, guess_lst)
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa2, guess_lst, disp=True)
print(offsets)
How to use numpy digitize method?
Update
As suggested, I also tried pandas.cut, but it still raises an error:
ValueError: bins must increase monotonically.
# using fmin_powell for method 3
def get_offsets_minimizing_train_preds_kappa3(guess_lst):
    res = pd.cut(train_preds, bins=[-np.inf] + list(guess_lst) + [np.inf],
                 right=False)
    res = pd.Series(res).cat.codes + 1
    res = res.to_numpy()
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa3, guess_lst, disp=True)
print(offsets)

It seems that during the minimization process the values in guess_lst are no longer monotonically increasing; one workaround is to pass a sorted copy of guess_lst to digitize, like:
def digitize_train2(train_preds, guess_lst):
    return np.digitize(train_preds, sorted(guess_lst)) + 1
and you get
# using fmin_powell for method 2
def get_offsets_minimizing_train_preds_kappa2(guess_lst):
    res = digitize_train2(train_preds, guess_lst)
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa2, guess_lst, disp=True)
print(offsets)
Optimization terminated successfully.
Current function value: -0.990792
Iterations: 2
Function evaluations: 400
[1.5 2.7015062 3.1 4.50379942 4.72643334 8.12463415 7.13652301]
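To see why only the numpy version breaks: the hand-written if/elif chain tolerates unordered thresholds, whereas np.digitize requires monotonic bins and raises a ValueError as soon as fmin_powell proposes an unordered guess_lst. Below is a minimal sketch (reusing train_preds and train_labels from the setup) that reproduces the failure and applies the same sorted() fix to the pd.cut variant from the update; the exact error wording may vary between numpy versions.
import numpy as np
import pandas as pd
from sklearn import metrics

# unordered cut points, like those fmin_powell can propose during its search
bad_bins = [3.0, 1.0, 2.0]
try:
    np.digitize([0.5, 1.5, 2.5], bad_bins)
except ValueError as err:
    print(err)  # e.g. "bins must be monotonically increasing or decreasing"

# the pd.cut variant can be fixed the same way: sort the edges before cutting
def get_offsets_minimizing_train_preds_kappa3(guess_lst):
    edges = [-np.inf] + sorted(guess_lst) + [np.inf]
    res = pd.cut(train_preds, bins=edges, right=False)
    res = pd.Series(res).cat.codes + 1
    return -metrics.cohen_kappa_score(train_labels, res.to_numpy(), weights='quadratic')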

Related

Solve coupled differential equation using the function scipy.integrate.RK45

x' = f(x,y,t)
y' = g(x,y,t)
Initial conditions have been given as x0 and y0 with t0. Find the solution graph in the range t0 to a.
I have tried doing this for non-coupled equations, but there seems to be a problem there as well. I have to solve this using exactly this function, so other functions are not an option.
from numpy import *
from matplotlib import pyplot as plt
def f(t,x):
    return -x
import scipy
from scipy import integrate as inte
solution = inte.RK45(f, 0 , [1] , 10 ,1, 0.001, e**-6)
print (solution)
I expect the output to be an array of all the values.
But <scipy.integrate._ivp.rk.RK45 at 0x1988ba806d8> is what I get.
You need to collect the data by calling the step() function:
from math import e
from scipy import integrate as inte
def f(t, x):
    return -x

solution = inte.RK45(f, 0, [1], 10, 1, 0.001, e**-6)

# collect data
t_values = []
y_values = []
for i in range(100):
    # get solution step state
    solution.step()
    t_values.append(solution.t)
    y_values.append(solution.y[0])
    # break loop after modeling is finished
    if solution.status == 'finished':
        break

data = zip(t_values, y_values)
Output:
(0.12831714796342164, 0.879574381033538)
(1.1283171479634215, 0.3239765636806864)
(2.1283171479634215, 0.11933136762238628)
(3.1283171479634215, 0.043953720407578944)
(4.128317147963422, 0.01618962035012491)
(5.128317147963422, 0.005963176828962677)
(6.128317147963422, 0.002196436798667919)
(7.128317147963422, 0.0008090208875093502)
(8.128317147963422, 0.00029798936023261037)
(9.128317147963422, 0.0001097594143523445)
(10, 4.5927433621121034e-05)
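If stepping manually is not required, a more compact alternative (a sketch, assuming SciPy >= 1.0 where solve_ivp is available) is to let solve_ivp drive the RK45 stepper and return the whole trajectory at once:
from math import e
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return -x

# integrate x' = -x from t = 0 to t = 10, starting at x(0) = 1
sol = solve_ivp(f, (0, 10), [1], method='RK45',
                t_eval=np.linspace(0, 10, 11), atol=e**-6)
print(list(zip(sol.t, sol.y[0])))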

Python - How to plot argument in integral that is not the value being integrated

I want to integrate a function that has no closed-form solution, with an unknown variable, and then plot the result against that unknown variable. As a simpler test, I tried the integral of f(x,c) = x^2 + c, integrated with respect to x, plotted for different values of c. However, the code below raises the error
only size-1 arrays can be converted to Python scalars
even though the integral of a single number, e.g. integral(5), seems to return the correct scalar value.
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
def f(x,c):
    return x**2+c

def integral(c):
    return integrate.quad(f,0,10, args = (c,))[0]
y = np.linspace(0,20,200)
plt.plot(y, integral(y))
You pass a numpy array as the argument c, while you want to integrate over x for each item of c. Therefore you can use this:
def f(x,c):
    return x**2+c

def integrate_f(c):
    result = np.zeros(len(c))
    counter = 0
    for item in c:
        result[counter] = integrate.quad(f,0,10, args = (item))[0]
        counter += 1
    return result
c_array = np.linspace(0,1,200)
plt.plot(c_array, integrate_f(c_array))
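A terser variant of the same idea (a sketch, not part of the original answers) is to keep the scalar integral function from the question and wrap it with numpy.vectorize so it can be applied to the whole array of c values:
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate

def f(x, c):
    return x**2 + c

def integral(c):
    # scalar version: integrate f over x in [0, 10] for a single value of c
    return integrate.quad(f, 0, 10, args=(c,))[0]

c_array = np.linspace(0, 20, 200)
plt.plot(c_array, np.vectorize(integral)(c_array))
plt.show()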
onno was a bit faster. But here is my similar solution. You need to loop over all the different c:
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
def f(x,c):
    return x**2+c

def getIntegral(c_list):
    result = []
    for c in c_list:
        integral = integrate.quad(f,0,10,args = c)[0]
        result.append(integral)
    return result

if __name__ == "__main__":
    c_list = np.linspace(0,20,200)
    plt.plot(c_list, getIntegral(c_list))
    plt.show()

NLopt minimize eigenvalue, Python

I have matrices whose elements can be defined as arithmetic expressions, and I have written Python code to optimise parameters in these expressions in order to minimize particular eigenvalues of the matrices. I have used scipy to do this, but was wondering whether it is possible with NLopt, as I would like to try a few more of the algorithms it offers (derivative-free variants).
In scipy I would do something like this:
import numpy as np
from scipy.linalg import eig
from scipy.optimize import minimize
def my_func(x):
    y, w = x
    arr = np.array([[y+w,-2],[-2,w-2*(w+y)]])
    ev, ew = eig(arr)
    return ev[0]
x0 = np.array([10, 3.45]) # Initial guess
minimize(my_func, x0)
In NLopt I have tried this:
import numpy as np
from scipy.linalg import eig
import nlopt
def my_func(x, grad):
    arr = np.array([[x[0]+x[1],-2],[-2,x[1]-2*(x[1]+x[0])]])
    ev, ew = eig(arr)
    return ev[0]
opt = nlopt.opt(nlopt.LN_BOBYQA, 2)
opt.set_lower_bounds([1.0,1.0])
opt.set_min_objective(my_func)
opt.set_xtol_rel(1e-7)
x = opt.optimize([10.0, 3.5])
minf = opt.last_optimum_value()
print "optimum at ", x[0],x[1]
print "minimum value = ", minf
print "result code = ", opt.last_optimize_result()
This returns:
ValueError: nlopt invalid argument
Is NLopt able to process this problem?
my_func should return a double; the posted sample returns a complex value:
print(type(ev[0]))
<class 'numpy.complex128'>
ev[0]
(13.607794065928395+0j)
correct version of my_func:
def my_func(x, grad):
    arr = np.array([[x[0]+x[1],-2],[-2,x[1]-2*(x[1]+x[0])]])
    ev, ew = eig(arr)
    return ev[0].real
updated sample returns:
optimum at [ 1. 1.]
minimum value = 2.7015621187164243
result code = 4
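For completeness, a sketch of the full corrected script, combining the question's NLopt setup with the fixed objective (print calls written for Python 3; nothing else is assumed beyond the question's code):
import numpy as np
from scipy.linalg import eig
import nlopt

def my_func(x, grad):
    # NLopt objectives must return a plain float, so take the real part
    arr = np.array([[x[0] + x[1], -2], [-2, x[1] - 2*(x[1] + x[0])]])
    ev, ew = eig(arr)
    return ev[0].real

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)
opt.set_lower_bounds([1.0, 1.0])
opt.set_min_objective(my_func)
opt.set_xtol_rel(1e-7)
x = opt.optimize([10.0, 3.5])
minf = opt.last_optimum_value()
print("optimum at", x[0], x[1])
print("minimum value =", minf)
print("result code =", opt.last_optimize_result())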

Python SciPy linprog optimization fails with status 3

Trying to minimize a simple linear function with linprog. The coefficients are the elements of arr2 multiplied by -1. There are only inequality constraints for each variable, such as -1 <= x1 <= 1, -2 <= x2 <= 2 and so on.
If I choose not to specify bounds in linprog:
from scipy.optimize import linprog
import numpy as np
import pandas as pd
numdim = 28
arr1 = np.ones(numdim)
arr1 = - arr1
arr2 = np.array([
19.53,
128.97,
3538,
931.8,
0.1825,
150.88,
10315,
0.8109,
3.9475,
3022,
31.77,
10323,
110.93,
220,
2219.5,
119.2,
703.6,
616,
338,
84.67,
151.13,
111.28,
29.515,
29.67,
158800,
167.15,
0.06802,
1179
])
constr_a = []
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = 1
    constr_a.append(constr_default)
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = -1
    constr_a.append(constr_default)
constr_a = np.asarray(constr_a)
constr_b = np.arange(1, 2*numdim + 1, 1)
constr_b[numdim:] = constr_b[:numdim]
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(None, None))
I get the following result:
fun: -4327476.2887400016
message: 'Optimization failed. The problem appears to be unbounded.'
status: 3
I've tried changing the last row to:
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(-1000, 1000))
The numbers specified as bounds are random. The output is:
fun: -4327476.2887400296
message: 'Optimization terminated successfully.'
status: 0
which gives us a slightly different result and the desired status.
My question is: am I misusing the library, and if so, how? Which answer is correct? I expected this code to work without specifying the 'bounds' parameter, and I cannot use a single bounds pair because these simple constraints are different for each variable.
I use python 2.7 and scipy 0.17.1. Big thanks in advance.
Update
constr_a should be a matrix according to the documentation (https://docs.scipy.org/doc/scipy/reference/optimize.linprog-simplex.html), and it actually is one in the code. To be sure the syntax is correct, we can cut the number of dimensions down to 2:
from scipy.optimize import linprog
import numpy as np
import pandas as pd
numdim = 2
arr1 = np.ones(numdim)
arr1 = - arr1
arr2 = np.array([
19.53,
128.97
])
constr_a = []
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = 1
    constr_a.append(constr_default)
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = -1
    constr_a.append(constr_default)
constr_a = np.asarray(constr_a)
constr_b = np.arange(1, 2*numdim + 1, 1)
constr_b[numdim:] = constr_b[:numdim]
print constr_a
print constr_b
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(None, None))
and this will work.
The constr_a list is not properly formed. It is an array of arrays instead of an array of scalars. This might be leading to an improper lower bound, causing the optimization to fail.
Perhaps
constr_a.append(constr_default)
should be
constr_a.append(constr_default[i])
Inspect both bound arrays to make sure they have the proper form and values.
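As a side note on the remark that the constraints differ per variable: linprog's bounds argument also accepts one (min, max) pair per variable, so box constraints like -1 <= x1 <= 1, -2 <= x2 <= 2 can be passed directly instead of being encoded in constr_a/constr_b. A minimal sketch of that usage (written for Python 3 and a recent SciPy, unlike the Python 2.7 code above), on the reduced two-variable problem from the update:
import numpy as np
from scipy.optimize import linprog

# reduced 2-variable version of the problem, as in the update above
c = -np.array([19.53, 128.97])

# one (min, max) pair per variable: -1 <= x1 <= 1, -2 <= x2 <= 2
per_var_bounds = [(-1, 1), (-2, 2)]

res = linprog(c, bounds=per_var_bounds)
print(res.status, res.message)
print(res.x, res.fun)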

R's auto.arima() equivalent in Python

I would like to implement the equivalent of R's auto.arima() function in Python.
In R, the auto.arima function takes the time-series values as input, computes the ARIMA order parameters (p, d, q) and fits a model; the user does not need to provide p, d, q values as inputs.
I want to use the equivalent of the auto.arima function in Python (without calling R's auto.arima) to predict future values in a time series. For the following time series, I run the Python equivalent of auto.arima on 40 points and predict the next 6 values, then move the window by 1 point and repeat the procedure.
Following is exemplary data:
value
0
2.584751
2.884758
2.646735
2.882105
3.267503
3.94552
4.70788
5.384803
54.77972
62.87139
78.68957
112.7166
155.0074
170.8084
196.1941
237.4928
254.9718
175.0717
217.3807
244.7357
274.4517
304.6838
373.3202
345.6252
461.2653
443.5982
472.3653
469.3326
506.8819
532.1639
542.2837
514.9269
528.0194
540.539
542.7031
556.8262
569.7132
576.2339
577.7212
577.0873
569.6199
573.2445
573.7825
589.3506
I have tried to write functions that compute the order of differencing using the Augmented Dickey-Fuller test, and pass the differenced time series (which becomes stationary after differencing the original series, according to the adfuller result) to the ARMA order-selection function to compute the p and q order values.
These values are then passed on to the ARIMA function in statsmodels. But the functions do not seem to work.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # needed for the plots below
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.stattools import acf, pacf
def diff_terms(timeseries):
    i = 1
    j = 0
    while i != 0:
        dftest = adfuller(timeseries, autolag='AIC')
        if dftest[0] <= dftest[4]["5%"]:
            i = 0
        else:
            timeseries = np.diff(timeseries)
            i = 1
            j = j + 1
    return j

def p_q_values_estimator(timeseries):
    p = 0
    q = 0
    lag_acf = acf(timeseries, nlags=20)
    lag_pacf = pacf(timeseries, nlags=20, method='ols')
    y = 1.96/np.sqrt(len(timeseries))
    if lag_acf[0] < y:
        for a in lag_acf:
            if a < y:
                q = q + 1
                break
    elif lag_acf[0] > y:
        for c in lag_acf:
            if c > y:
                q = q + 1
                break
    if lag_pacf[0] < y:
        for b in lag_pacf:
            if b < y:
                p = p + 1
                break
    elif lag_pacf[0] > y:
        for d in lag_pacf:
            if d > y:
                p = p + 1
                break
    p_q = [p, q]
    return(p_q)

def p_q_values_estimator2(timeseries):
    res = sm.tsa.arma_order_select_ic(timeseries, ic=['aic'], max_ar=5, max_ma=4, trend='nc')
    return res.aic_min_order
data1=[]
data=pd.read_csv('ABC.csv')
d_value=diff_terms(data.value)
data1[:]=data[:]
data = data[0:40]
i=0
while i < d_value:
    data_diff = np.diff(data)
    i = i + 1
p_q_values=p_q_values_estimator(data)
p_value=p_q_values[0]
q_value=p_q_values[1]
p_q_values2=p_q_values_estimator2(data_diff)
p_value2=p_q_values2[0]
q_value2=p_q_values2[1]
exogx = np.array(range(0,40))
fit2 = sm.tsa.ARIMA(np.array(data), (p_value, d_value, q_value), exog = exogx).fit()
print(fit2.fittedvalues)
pred2 = fit2.predict(start = 40, end = 45, exog = np.array(range(40,46)))
print(pred2)
plt.plot(fit2.fittedvalues)
plt.plot(np.array(data))
plt.plot(range(40,45), np.array(pred2))
plt.show()
Errors – on using arma order select
p_q_values2=p_q_values_estimator2(data_diff)
line 56, in p_q_values_estimator2
res = sm.tsa.arma_order_select_ic(timeseries, ic=['aic'], max_ar=5, max_ma=4,trend='nc')
File "C:\Python27\lib\site-packages\statsmodels\tsa\stattools.py", line 1052, in arma_order_select_ic min_res.update({i + '_min_order' : (mins[0][0], mins[1][0])})
IndexError: index 0 is out of bounds for axis 0 with size 0
Errors – on using acf pacf based function for computation of P,Q order
fit2 = sm.tsa.ARIMA(np.array(data), (p_value, d_value, q_value), exog = exogx).fit()
File "C:\Python27\lib\site-packages\statsmodels\tsa\arima_model.py", line 1104, in fit
callback, **kwargs)
File "C:\Python27\lib\site-packages\statsmodels\tsa\arima_model.py", line 942, in fit
armafit.mle_retvals = mlefit.mle_retvals
AttributeError: 'LikelihoodModelResults' object has no attribute 'mle_retvals'
vals is my own thing, but you can create your own index with pd.date_range:
rdata=ts(traindf.requests_per_active.values,frequency=12)
#forecasts
fit=forecast.auto_arima(rdata)
forecast_output=forecast.forecast(fit,h=6,level=(95.0))
#convert forecasts to dataframe
forecast_results=pd.Series(forecast_output[3], index=vals.index)
lowerpi=pd.Series(forecast_output[4], index=vals.index)
upperpi=pd.Series(forecast_output[5], index=vals.index)
results = pd.DataFrame({'forecast' : forecast_results, 'lowerpi' : lowerpi, 'upperpi' : upperpi})
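This answer appears to call R's forecast package from Python (e.g. through an rpy2 bridge), and vals is the answerer's own series. If you need a replacement for vals.index, a minimal sketch of building one with pd.date_range (the start date and frequency here are hypothetical):
import pandas as pd

# hypothetical monthly DatetimeIndex covering the 6-step forecast horizon,
# usable in place of vals.index in the snippet above
forecast_index = pd.date_range(start='2019-01-01', periods=6, freq='MS')
print(forecast_index)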
You can use the pyramid-arima library. It brings R's auto.arima() to Python. It wraps "statsmodels.tsa.ARIMA and statsmodels.tsa.statespace.SARIMAX into one estimator class" (per https://pypi.org/project/pyramid-arima/).
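A minimal usage sketch, assuming pyramid-arima is installed (the project has since been renamed pmdarima, where the same function is available as pmdarima.auto_arima); the training series below is a hypothetical stand-in for the first 40 values from the question:
import numpy as np
from pyramid.arima import auto_arima  # newer installs: from pmdarima import auto_arima

# hypothetical training series; in the question this would be the first 40 values
train = np.cumsum(np.random.RandomState(0).normal(size=40)) + 10.0

model = auto_arima(train, error_action='ignore', suppress_warnings=True)
forecast = model.predict(n_periods=6)  # predict the next 6 values, as in the question
print(model.order)
print(forecast)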
