x' = f(x,y,t)
y' = g(x,y,t)
Initial conditions have been given as x0 and y0 with t0. Find the solution graph in the range t0 to a.
I have tried doing this for non-coupled equations, but there seems to be a problem there as well. I have to solve this exactly using this function, so other functions are not an option.
from numpy import *
from matplotlib import pyplot as plt
import scipy
from scipy import integrate as inte

def f(t, x):
    return -x

solution = inte.RK45(f, 0, [1], 10, 1, 0.001, e**-6)
print(solution)
I expect the output to be an array of all the values.
But <scipy.integrate._ivp.rk.RK45 at 0x1988ba806d8> is what I get.
You need to collect the data by calling the step() function:
from math import e
from scipy import integrate as inte

def f(t, x):
    return -x

# note: e**-6 is exp(-6) ≈ 0.0025, not 1e-6; use 1e-6 for the usual atol
solution = inte.RK45(f, 0, [1], 10, 1, 0.001, e**-6)

# collect data
t_values = []
y_values = []
for i in range(100):
    # advance the solver by one step
    solution.step()
    t_values.append(solution.t)
    y_values.append(solution.y[0])
    # break the loop once the integration is finished
    if solution.status == 'finished':
        break

data = zip(t_values, y_values)
Output:
(0.12831714796342164, 0.879574381033538)
(1.1283171479634215, 0.3239765636806864)
(2.1283171479634215, 0.11933136762238628)
(3.1283171479634215, 0.043953720407578944)
(4.128317147963422, 0.01618962035012491)
(5.128317147963422, 0.005963176828962677)
(6.128317147963422, 0.002196436798667919)
(7.128317147963422, 0.0008090208875093502)
(8.128317147963422, 0.00029798936023261037)
(9.128317147963422, 0.0001097594143523445)
(10, 4.5927433621121034e-05)
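For the coupled system in the original question, a minimal sketch along the same lines (the right-hand sides here are placeholders; substitute your own f and g): pack both variables into one state vector and return both derivatives.

from scipy import integrate as inte

def rhs(t, s):
    x, y = s
    return [-y, x]  # placeholder f(x, y, t) and g(x, y, t); replace with your own

solution = inte.RK45(rhs, 0, [1, 0], 10, max_step=1, rtol=0.001, atol=1e-6)
t_values, x_values, y_values = [], [], []
while solution.status == 'running':
    solution.step()
    t_values.append(solution.t)
    x_values.append(solution.y[0])
    y_values.append(solution.y[1])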
I am trying to minimize the quadratic weighted kappa function using scipy's fmin_powell minimizer.
The two functions digitize_train and digitize_train2 give exactly the same results.
However, when I try to use these functions with the scipy minimizer, the second method fails.
I have been trying to debug the problem for hours; to my surprise, despite the two functions being identical, the numpy digitize version fails under fmin_powell minimization.
How do I fix the error?
Question
How to use numpy.digitize in scipy fmin_powell?
SETUP
# imports
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.optimize import fmin_powell
from sklearn import metrics
# data
train_labels = [1,1,8,7,6,5,3,2,4,4]
train_preds = [0.1,1.2,8.9, 7.6, 5.5, 5.5, 2.99, 2.4, 3.5, 4.0]
guess_lst = (1.5,2.9,3.1,4.5,5.5,6.1,7.1)
# functions
# here I am trying to convert real numbers from -inf to +inf into integers 1 to 8
def digitize_train(train_preds, guess_lst):
    (x1, x2, x3, x4, x5, x6, x7) = list(guess_lst)
    res = []
    for y in list(train_preds):
        if y < x1:
            res.append(1)
        elif y < x2:
            res.append(2)
        elif y < x3:
            res.append(3)
        elif y < x4:
            res.append(4)
        elif y < x5:
            res.append(5)
        elif y < x6:
            res.append(6)
        elif y < x7:
            res.append(7)
        else:
            res.append(8)
    return res

def digitize_train2(train_preds, guess_lst):
    return np.digitize(train_preds, guess_lst) + 1
# compare the two functions
df = pd.DataFrame({'train_labels': train_labels,
                   'train_preds': train_preds,
                   'method_1': digitize_train(train_preds, guess_lst),
                   'method_2': digitize_train2(train_preds, guess_lst)})
df
**NOTE: The two functions give exactly the same results.**
Method 1: without numpy digitize (runs fine)
# using fmin_powell for method 1
def get_offsets_minimizing_train_preds_kappa(guess_lst):
    res = digitize_train(train_preds, guess_lst)
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa, guess_lst, disp=True)
print(offsets)
Method 2: using numpy digitize (fails)
# using fmin_powell for method 2
def get_offsets_minimizing_train_preds_kappa2(guess_lst):
    res = digitize_train2(train_preds, guess_lst)
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa2, guess_lst, disp=True)
print(offsets)
How do I use the numpy digitize method here?
Update
As per the suggestions I tried pandas cut, but it still gives an error:
ValueError: bins must increase monotonically.
# using fmin_powell for method 3
def get_offsets_minimizing_train_preds_kappa3(guess_lst):
    res = pd.cut(train_preds, bins=[-np.inf] + list(guess_lst) + [np.inf],
                 right=False)
    res = pd.Series(res).cat.codes + 1
    res = res.to_numpy()
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa3, guess_lst, disp=True)
print(offsets)
It seems that during the minimization process the values in guess_lst are no longer monotonically increasing. One workaround is to pass a sorted copy of guess_lst to digitize, like:
def digitize_train2(train_preds, guess_lst):
    return np.digitize(train_preds, sorted(guess_lst)) + 1
and then, rerunning the minimization:
# using fmin_powell for method 2
def get_offsets_minimizing_train_preds_kappa2(guess_lst):
    res = digitize_train2(train_preds, guess_lst)
    return -metrics.cohen_kappa_score(train_labels, res, weights='quadratic')

offsets = fmin_powell(get_offsets_minimizing_train_preds_kappa2, guess_lst, disp=True)
print(offsets)
Optimization terminated successfully.
Current function value: -0.990792
Iterations: 2
Function evaluations: 400
[1.5 2.7015062 3.1 4.50379942 4.72643334 8.12463415
7.13652301]
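For reference, a quick reproduction (mine, not from the original post) of why the unmodified method 2 fails: np.digitize raises a ValueError as soon as the optimizer perturbs guess_lst out of sorted order.

import numpy as np
# sorted bins work
np.digitize([2.0, 4.0], [1.5, 2.9, 3.1, 4.5, 5.5, 6.1, 7.1])
# once the bin edges are out of order, digitize raises:
# ValueError: bins must be monotonically increasing or decreasing
np.digitize([2.0, 4.0], [1.5, 3.1, 2.9, 4.5, 5.5, 6.1, 7.1])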
I'm using scipy.integrate.solve_ivp to solve a system of ODEs because it has the event functions.
The reason I need this is that during the integration I sometimes get a singular matrix, and every time that happens I need to stop the integration and restart it with new parameters.
I would like to know whether it is possible to restart scipy.integrate.solve_ivp with new parameters after a terminal event has occurred, and if so, how I could do it.
Any help would be very much appreciated.
This is my current script based on an example from
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from animate_plot import animate
def upward_cannon(t, y):
    return [y[1], -0.5]

def hit_ground(t, y):
    return y[0]

def apex(t, y):
    return y[1]

hit_ground.terminal = True
hit_ground.direction = -1

t0 = 0
tf = 10
sol = solve_ivp(upward_cannon, t_span=[t0, tf], y0=[90, 10], t_eval=np.arange(t0, tf, 0.01),
                events=[hit_ground, apex], dense_output=True)
linesData = { 1: [[-0.0, 0.0],[0.0, 0.0]]}#,
# 2: [[-0.5, 0],[0.5, 0.0]]}#, 3: [[-0.5, 0],[0.5, 0]]}
pointsofInterest = {}#3: [[0.5, 0.0]]}#, 2: [[180.0, 10]]}
model_markers = np.array([])
plot_title = 'Upward Particle'
plot_legend = ['Forward Dynamics']
q_rep = sol.y.T[:,0]
fig = plt.figure()
ax = fig.add_subplot(111)
xs = np.arange(t0, tf, 0.01)
for idx in range(0, q_rep.shape[0]):  # loop over the total number of frames
    y = q_rep[idx]
    ax.cla()
    ax.scatter(xs[idx], y, s=50)
    plt.ylim([-10, 190])
    plt.xlim([-100, 100])
    plt.pause(0.001)
plt.show()
Thank you in advance.
Kind Regards
You have two options; both are recursive.
Option 1: Write the function to call itself inside the script. This would be true recursion, and elegant.
Option 2: If your function comes across values you need to resolve, use argparse and os to invoke the script again with the specified values.
Example:
os.system("python3 filename.py -f argparseinputs")
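Option 1 can also be written without literal recursion. A minimal sketch (the drag parameter and the bounce rule below are hypothetical stand-ins for "new parameters"): run solve_ivp in a loop, and each time the terminal event fires, restart from the event state with modified parameters.

import numpy as np
from scipy.integrate import solve_ivp

def upward_cannon(t, y, drag):
    # `drag` is a hypothetical parameter standing in for whatever changes on restart
    return [y[1], -0.5 - drag * y[1]]

def hit_ground(t, y, drag):
    # event functions receive the same extra args as the right-hand side
    return y[0]

hit_ground.terminal = True
hit_ground.direction = -1

t0, tf = 0.0, 100.0
y0 = [90.0, 10.0]
drag = 0.0
pieces = []
for _ in range(20):  # cap the number of restarts
    sol = solve_ivp(upward_cannon, [t0, tf], y0, args=(drag,),
                    events=hit_ground, dense_output=True)
    pieces.append(sol)
    if sol.status != 1:  # 1 means a terminal event fired; otherwise we reached tf
        break
    # restart from the event time with a modified state and parameters
    t0 = sol.t_events[0][0]
    y0 = [0.0, -0.5 * sol.y_events[0][0][1]]  # e.g. bounce upward with damping
    drag += 0.01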
I wrote a piece of code to build a simple linear regression model in Python. However, I am having trouble getting the correct cost function and, most importantly, the correct theta parameters. The model is implemented from scratch, not with the scikit-learn module. I used Andrew Ng's notes from his ML Coursera course to create the model. The correct values of theta are [[-3.630291] [1.166362]].
I would be really grateful if someone could offer their expertise and point out what I'm doing wrong.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Load The Dataset
dataset = pd.read_csv("Population vs Profit.txt", names=["Population", "Profit"])
print(dataset.head())
col = len(dataset.columns)
x = dataset.iloc[:,:col-1].values
y = dataset.iloc[:,col-1].values
#Visualizing The Dataset
plt.scatter(x, y, color="red", marker="x", label="Profit")
plt.title("Population vs Profit")
plt.xlabel("Population")
plt.ylabel("Profit")
plt.legend()
plt.show()
#Preprocessing Data
dataset.insert(0,"x0",1)
col = len(dataset.columns)
x = dataset.iloc[:,:col-1].values
b = np.zeros(col-1)
m = len(y)
costlist = []
alpha = 0.001
iteration = 10000
#Defining Functions
def hypothesis(x, b, y):
    h = x.dot(b.T) - y
    return h

def cost(x, b, y, m):
    j = np.sum(hypothesis(x, b, y)**2)
    j = j/(2*m)
    return j

print(cost(x, b, y, m))

def gradient_descent(x, b, y, m, alpha):
    for i in range(iteration):
        h = hypothesis(x, b, y)
        product = np.sum(h.dot(x))
        b = b - ((alpha/m)*product)
        costlist.append(cost(x, b, y, m))
    return b, cost(x, b, y, m)

b, mincost = gradient_descent(x, b, y, m, alpha)
print(b, mincost)
print(cost(x, b, y, m))
plt.plot(b,color="green")
plt.show()
The dataset I'm using is the following text.
6.1101,17.592
5.5277,9.1302
8.5186,13.662
7.0032,11.854
5.8598,6.8233
8.3829,11.886
7.4764,4.3483
8.5781,12
6.4862,6.5987
5.0546,3.8166
5.7107,3.2522
14.164,15.505
5.734,3.1551
8.4084,7.2258
5.6407,0.71618
5.3794,3.5129
6.3654,5.3048
5.1301,0.56077
6.4296,3.6518
7.0708,5.3893
6.1891,3.1386
20.27,21.767
5.4901,4.263
6.3261,5.1875
5.5649,3.0825
18.945,22.638
12.828,13.501
10.957,7.0467
13.176,14.692
22.203,24.147
5.2524,-1.22
6.5894,5.9966
9.2482,12.134
5.8918,1.8495
8.2111,6.5426
7.9334,4.5623
8.0959,4.1164
5.6063,3.3928
12.836,10.117
6.3534,5.4974
5.4069,0.55657
6.8825,3.9115
11.708,5.3854
5.7737,2.4406
7.8247,6.7318
7.0931,1.0463
5.0702,5.1337
5.8014,1.844
11.7,8.0043
5.5416,1.0179
7.5402,6.7504
5.3077,1.8396
7.4239,4.2885
7.6031,4.9981
6.3328,1.4233
6.3589,-1.4211
6.2742,2.4756
5.6397,4.6042
9.3102,3.9624
9.4536,5.4141
8.8254,5.1694
5.1793,-0.74279
21.279,17.929
14.908,12.054
18.959,17.054
7.2182,4.8852
8.2951,5.7442
10.236,7.7754
5.4994,1.0173
20.341,20.992
10.136,6.6799
7.3345,4.0259
6.0062,1.2784
7.2259,3.3411
5.0269,-2.6807
6.5479,0.29678
7.5386,3.8845
5.0365,5.7014
10.274,6.7526
5.1077,2.0576
5.7292,0.47953
5.1884,0.20421
6.3557,0.67861
9.7687,7.5435
6.5159,5.3436
8.5172,4.2415
9.1802,6.7981
6.002,0.92695
5.5204,0.152
5.0594,2.8214
5.7077,1.8451
7.6366,4.2959
5.8707,7.2029
5.3054,1.9869
8.2934,0.14454
13.394,9.0551
5.4369,0.61705
One issue is with your product: it is currently a scalar when it should be a vector. I was able to get the values [-3.24044334 1.12719788] by rewriting your for-loop as follows:
def gradient_descent(x, b, y, m, alpha):
    for i in range(iteration):
        h = hypothesis(x, b, y)
        #product = np.sum(h.dot(x))
        xvalue = x[:,1]
        product = h.dot(xvalue)
        hsum = np.sum(h)
        b = b - ((alpha/m) * np.array([hsum, product]))
        costlist.append(cost(x, b, y, m))
    return b, cost(x, b, y, m)
There is possibly another issue besides this, as it doesn't converge to your answer. You should also make sure you are using the same alpha.
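As a more general sketch (my suggestion, not part of the original answer): the full gradient of the least-squares cost is x.T.dot(h)/m with h the residual vector, which handles any number of features without hand-building the update vector.

def gradient_descent_vec(x, b, y, m, alpha, iterations=10000):
    for _ in range(iterations):
        h = hypothesis(x, b, y)  # residuals X.b - y, shape (m,)
        grad = x.T.dot(h) / m    # one gradient entry per parameter
        b = b - alpha * grad
        costlist.append(cost(x, b, y, m))
    return b, cost(x, b, y, m)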
I need to implement a chi-square function and I'm stuck because it always reports an invalid syntax error when I run it. How should I write this script? And how do I input "v"?
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

data = np.loadtxt("214 ohm.txt", skiprows=1)
xdata = [row[0] for row in data]  # x is the voltage, unit "V"
ydata = [row[1] for row in data]  # y is the current, unit "mA"

percision_error_V = np.array(xdata) * 0.0025  # last digit of the reading times the measured voltage
accuracy_error_V = 0.01  # we are using DC voltage, so use the error provided online

erry = []
for i in range(len(percision_error_V)):
    # compare the precision and accuracy errors for the voltage and use the larger one
    erry.append(max(percision_error_V[i], accuracy_error_V))

def model_function(x, a, b):
    return a*x + b

p0 = [0, 0.]  # 214 ohm is measured with an ohmmeter
p_opt, p_cov = curve_fit(model_function, xdata, ydata, p0, erry, True)
print(erry)
a_opt = p_opt[0]
b_opt = p_opt[1]
print(p_cov)
print("diagonal of p_cov is", np.diag(p_cov))
print("a_opt, b_opt is", a_opt, b_opt)

xhat = np.arange(0, 16, 0.1)
plt.plot(xhat, model_function(xhat, a_opt, b_opt), 'r-', label="model function")
plt.errorbar(xdata, ydata, np.array(erry), linestyle="", marker='s', label="error bar")
plt.legend()
plt.ylabel('Current (mA)')
plt.xlabel('Voltage (V)')
plt.title("Voltage vs. Current with 220 ohm Resistor")
plt.show()

p_sigma = np.sqrt(np.diag(p_cov))
print("p_sigma is", p_sigma)
for i in range(len(xdata)):
sum=sum((ydata[i]-model_function(xdata[i], a_opt, b_opt))
chi.append(sum)
This is the required function I'm supposed to implement in Python.
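Presumably (an assumption on my part, since the formula itself is not shown, based on the per-point errors and the "v") the intended quantity is the reduced chi-square:

chi²_v = (1/v) * Σ_i ((y_i − f(x_i; a, b)) / σ_i)²

where v is the number of degrees of freedom (number of data points minus number of fit parameters) and σ_i are the per-point errors (erry in the script above).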
Thanks. My code is fine up to the chi-square equation; how should I fix it?
You have an indentation error, a missing parenthesis, and a variable-naming issue so far in this sample of code.
FROM
for i in range(len(xdata)):
sum=sum((ydata[i]-model_function(xdata[i], a_opt, b_opt))
1.append(sum)
TO
a = []
total = 0
for i in range(len(xdata)):
    total = total + (ydata[i] - model_function(xdata[i], a_opt, b_opt))
    a.append(total)
Variable names cannot start with a digit (e.g. 1, 2, 3); they must start with a letter or an underscore, e.g. a1, alfa, betta, s_t, _s.
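For the chi-square itself, a hedged sketch of what the question seems to be after (assuming the usual definition, with the per-point errors erry and v degrees of freedom):

residuals = np.array(ydata) - model_function(np.array(xdata), a_opt, b_opt)
chi2 = np.sum((residuals / np.array(erry))**2)  # chi-square weighted by per-point errors
v = len(xdata) - 2  # degrees of freedom: N points minus 2 fit parameters
chi2_reduced = chi2 / v
print("chi-square =", chi2, "reduced chi-square =", chi2_reduced)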
I have matrices whose elements can be defined as arithmetic expressions, and I have written Python code to optimise parameters in these expressions in order to minimize particular eigenvalues of the matrix. I have used scipy to do this, but was wondering whether it is possible with NLopt, as I would like to try a few more of the algorithms it offers (derivative-free variants).
In scipy I would do something like this:
import numpy as np
from scipy.linalg import eig
from scipy.optimize import minimize
def my_func(x):
    y, w = x
    arr = np.array([[y+w, -2], [-2, w - 2*(w+y)]])
    ev, ew = eig(arr)
    return ev[0]

x0 = np.array([10, 3.45])  # initial guess
minimize(my_func, x0)
In NLopt I have tried this:
import numpy as np
from scipy.linalg import eig
import nlopt
def my_func(x, grad):
    arr = np.array([[x[0]+x[1], -2], [-2, x[1] - 2*(x[1]+x[0])]])
    ev, ew = eig(arr)
    return ev[0]

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)
opt.set_lower_bounds([1.0, 1.0])
opt.set_min_objective(my_func)
opt.set_xtol_rel(1e-7)
x = opt.optimize([10.0, 3.5])
minf = opt.last_optimum_value()
print("optimum at", x[0], x[1])
print("minimum value =", minf)
print("result code =", opt.last_optimize_result())
This returns:
ValueError: nlopt invalid argument
Is NLopt able to process this problem?
my_func should return a double; the posted sample returns a complex value:
>>> print(type(ev[0]))
<class 'numpy.complex128'>
>>> ev[0]
(13.607794065928395+0j)
The correct version of my_func:
def my_func(x, grad):
    arr = np.array([[x[0]+x[1], -2], [-2, x[1] - 2*(x[1]+x[0])]])
    ev, ew = eig(arr)
    return ev[0].real
The updated sample returns:
optimum at [ 1. 1.]
minimum value = 2.7015621187164243
result code = 4