I have a Newton optimization problem where I must find f'(x) and f''(x) at x = 2.5 for f(x) = 2*sin(x) - x**2/10. I tried using sympy and np.diff for the first and second derivatives, but I had no luck because I kept getting errors, so I went back to differentiating manually. Any clue how to differentiate the function f with the help of another library? Here's the code:
import numpy as np

def Newton(x0):
    x = x0
    f = lambda x: 2 * np.sin(x) - x**2 / 10
    f_x0 = f(x0)
    # First derivative
    f1 = lambda x: 2 * np.cos(x) - x / 5
    f_x1 = f1(x0)
    # Second derivative
    f2 = lambda x: -2 * np.sin(x) - 1 / 5
    f_x2 = f2(x0)
    # Newton step for optimization: x1 = x0 - f'(x0)/f''(x0)
    x1 = x0 - (f_x1 / f_x2)
    x0 = x1
    return x, f_x0, f_x1, f_x2, x0
I want to find the first and second derivatives without doing it manually.
In your case, the derivatives can be calculated using the scipy library as follows:
import numpy as np
from scipy.misc import derivative  # note: scipy.misc.derivative was removed in SciPy 1.12; use an older SciPy or a package like numdifftools instead

def f(x):
    return 2 * np.sin(x) - x**2 / 10

print("First derivative:", derivative(f, 2.5, dx=1e-9))
print("Second derivative:", derivative(f, 2.5, n=2, dx=0.02))
Here the first and second derivatives are calculated for your function at x = 2.5.
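As a sanity check, the analytic values are f'(2.5) = 2*cos(2.5) - 2.5/5 ≈ -2.1023 and f''(2.5) = -2*sin(2.5) - 1/5 ≈ -1.3969, so both numerical results should land very close to these.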
The same can be done with the sympy library and some may find this easier than the above method.
from sympy import *

x = Symbol('x')
y = 2 * sin(x) - x**2 / 10       # the function
yprime = y.diff(x)               # first derivative (symbolic)
ydoubleprime = y.diff(x, 2)      # second derivative (symbolic)
f_first_derivative = lambdify(x, yprime)
f_second_derivative = lambdify(x, ydoubleprime)
print("First derivative:", f_first_derivative(2.5))
print("Second derivative:", f_second_derivative(2.5))
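To close the loop on the original Newton code, the lambdified derivatives can be dropped straight into the update x1 = x0 - f'(x0)/f''(x0). A minimal sketch (reusing f_first_derivative and f_second_derivative from above; the iteration count is arbitrary):

x0 = 2.5
for _ in range(10):
    x0 = x0 - f_first_derivative(x0) / f_second_derivative(x0)
print("Stationary point:", x0)   # converges near x ≈ 1.4276 for this f and start point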
I am a little bit confused about how to calculate the partial derivatives of the sigmoid function in Python, since in general we can calculate them using code like the following.
Example: f(x, y) = x**4 + x*y**4
With respect to x it would then be:
import sympy as sym

# Partial derivatives of a multivariable function
x, y = sym.symbols('x y')
f = x**4 + x*y**4

# Differentiating partially w.r.t. x
derivative_f = f.diff(x)
derivative_f
How would the code work for the partial derivatives of the sigmoid, then?
I tried to substitute the sigmoid into the same pattern, but I think I am doing something incorrectly.
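The question doesn't show the exact sigmoid expression, so here is a minimal sketch assuming the standard logistic sigmoid 1/(1 + exp(-z)) applied to a two-variable argument (the choice z = x*y is purely illustrative); the f.diff(...) pattern carries over unchanged:

import sympy as sym

x, y = sym.symbols('x y')
s = 1 / (1 + sym.exp(-(x * y)))   # hypothetical multivariable sigmoid
ds_dx = sym.simplify(s.diff(x))   # partial derivative w.r.t. x
ds_dy = sym.simplify(s.diff(y))   # partial derivative w.r.t. y
print(ds_dx)
print(ds_dy)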
Is it possible to solve a cubic equation without using sympy?
Example:
import sympy as sp
xp = 30
num = xp + 4.44
sp.var('x, a, b, c, d')
Sol3 = sp.solve(0.0509 * x ** 3 + 0.0192 * x ** 2 + 3.68 * x - num, x)
The result is:
[6.07118098358257, -3.2241955998463 - 10.0524891203436*I, -3.2241955998463 + 10.0524891203436*I]
But I want to find a way to do it with numpy, or without any third-party library at all.
I tried with numpy:
import numpy as np
coeff = [0.0509, 0.0192, 3.68, --4.44]
print(np.roots(coeff))
But the result is :
[ 0.40668245+8.54994773j 0.40668245-8.54994773j -1.19057511+0.j]
In your NumPy method you are making two slight mistakes with the final coefficient.
In the SymPy example your last coefficient is -num, which is, according to your code: -num = -(xp + 4.44) = -(30 + 4.44) = -34.44.
In your NumPy example your last coefficient is --4.44, which is +4.44 and does not equal -34.44.
If you edit the NumPy code you will get:
import numpy as np
coeff = [0.0509, 0.0192, 3.68, -34.44]
print(np.roots(coeff))
[-3.2241956 +10.05248912j -3.2241956 -10.05248912j
  6.07118098 +0.j        ]
The answers are thus the same (note that NumPy uses j to indicate the imaginary unit, while SymPy uses I).
You could implement the cubic formula yourself; this YouTube video from Mathologer may help in understanding it.
Based on that, the cubic formula for ax^3 + bx^2 + cx + d = 0 can be written like this:
def cubic(a, b, c, d):
    n = -b**3/27/a**3 + b*c/6/a**2 - d/2/a
    s = (n**2 + (c/3/a - b**2/9/a**2)**3)**0.5
    r0 = (n - s)**(1/3) + (n + s)**(1/3) - b/3/a
    # r1 and r2 as written only become true roots once the complex
    # cube roots of unity are mixed in (see the note below)
    r1 = (n + s)**(1/3) + (n + s)**(1/3) - b/3/a
    r2 = (n - s)**(1/3) + (n - s)**(1/3) - b/3/a
    return (r0, r1, r2)
The simplified version of the formula only needs the depressed cubic's coefficients p and q (i.e. x^3 + px + q = 0, corresponding to c and d above with a = 1, b = 0) and can be implemented like this:
def cubic(p, q):
    n = -q/2
    s = (q*q/4 + p**3/27)**0.5
    r0 = (n - s)**(1/3) + (n + s)**(1/3)
    r1 = (n + s)**(1/3) + (n + s)**(1/3)
    r2 = (n - s)**(1/3) + (n - s)**(1/3)
    return (r0, r1, r2)
print(cubic(-15,-126))
(5.999999999999999, 9.999999999999998, 2.0)
I'll let you mix in complex-number operations to properly get all 3 roots; as written, only r0 is guaranteed to be an actual root (in the example above, 10 and 2 do not solve x^3 - 15x - 126 = 0).
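For completeness, a sketch of what that mixing-in could look like (my own variant, not part of the code above), using cmath so both cube roots stay on consistent branches; w is a primitive cube root of unity:

import cmath

def cubic_all_roots(p, q):
    # all three roots of the depressed cubic x**3 + p*x + q = 0
    n = -q / 2
    s = cmath.sqrt(q*q/4 + p**3/27)
    u = (n + s)**(1/3)                              # principal complex cube root
    v = -p / (3*u) if u != 0 else (n - s)**(1/3)    # paired so that u*v = -p/3
    w = complex(-0.5, 3**0.5 / 2)                   # primitive cube root of unity
    return (u + v, w*u + w.conjugate()*v, w.conjugate()*u + w*v)

print(cubic_all_roots(-15, -126))   # ≈ (6+0j), (-3+3.4641j), (-3-3.4641j)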
I cannot write a program that solves a 2nd-order differential equation based on the code I wrote for y' = y.
I know that I should write a program which turns a 2nd-order differential equation into two first-order ordinary differential equations, but I don't know how to do that in Python.
P.S.: I have to use the code below. It's homework.
Please forgive my mistakes, it's my first question. Thanks in advance.
from pylab import *

xd = []
yd = []

def F(x, y):
    return y

def rk4(x0, y0, h, N):
    xd.append(x0)
    yd.append(y0)
    for i in range(1, N + 1):
        k1 = F(x0, y0)
        k2 = F(x0 + h/2, y0 + h/2*k1)
        k3 = F(x0 + h/2, y0 + h/2*k2)
        k4 = F(x0 + h, y0 + h*k3)
        k = 1/6*(k1 + 2*k2 + 2*k3 + k4)
        y = y0 + h*k
        x = x0 + h
        yd.append(y)
        xd.append(x)
        y0 = y
        x0 = x
    return xd, yd

x0 = 0
y0 = 1
h = 0.1
N = 10
x, y = rk4(x0, y0, h, N)
print("x=", x)
print("y=", y)
plot(x, y)
show()
You can reformulate basically any scalar ODE (ordinary differential equation) of order n in Cauchy form as an ODE of order 1. The only thing you "pay" in this operation is that the second ODE's unknown is a vector instead of a scalar function.
Let me give you an example with an ODE of order 2. Suppose your ODE is y'' = F(x, y, y'). Then you can replace it by [y, y']' = [y', F(x, y, y')], where the derivative of a vector is understood component-wise.
Let's go back to your code and, instead of using Runge-Kutta of order 4 as the approximate solution of your ODE, apply a simple Euler scheme.
import matplotlib.pyplot as plt

# We approximate the solution of y' = f(x, y) for x in [x_0, x_1]
# satisfying the Cauchy condition y(x_0) = y0
def f(x, y0):
    return y0
# here f defines the equation y' = y

def explicit_euler(x0, x1, y0, N):
    # The following formula relates h and N
    h = (x1 - x0) / (N + 1)
    xd = list()
    yd = list()
    xd.append(x0)
    yd.append(y0)
    for i in range(1, N + 1):
        # We use the explicit Euler scheme y_{i+1} = y_i + h * f(x_i, y_i)
        y = yd[-1] + h * f(xd[-1], yd[-1])
        # you can replace the above scheme by any other (R-K 4 for example!)
        x = xd[-1] + h
        yd.append(y)
        xd.append(x)
    return xd, yd

N = 250
x1 = 5
x0 = 0
y0 = 1
# the only function which satisfies y(0) = 1 and y' = y is y(x) = exp(x)
xd, yd = explicit_euler(x0, x1, y0, N)
plt.plot(xd, yd)
plt.show()
# this plot has the right shape!
# this plot has the right shape !
Note that you can replace the Euler scheme with R-K 4, which has better stability and convergence properties.
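For reference, a single classical RK4 step with the same f(x, y) interface looks like this and can replace the Euler update line y = yd[-1] + h * f(xd[-1], yd[-1]) (a sketch mirroring the rk4 loop from the question):

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2*k1)
    k3 = f(x + h/2, y + h/2*k2)
    k4 = f(x + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)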
Now suppose that you want to solve a second-order ODE, say y'' = -y with initial conditions y(0) = 1 and y'(0) = 0. Then you have to transform the scalar function y into a vector of size 2, as explained above and in the comments in the code below.
import matplotlib.pyplot as plt
import numpy as np

# We approximate the solution of y'' = f(x, y, y') for x in [x_0, x_1]
# satisfying the Cauchy condition of order 2:
# y(x_0) = y0 and y'(x_0) = y1
def f(x, y_d_0, y_d_1):
    return -y_d_0
# here f defines the equation y'' = -y

def explicit_euler(x0, x1, y0, y1, N):
    # The following formula relates h and N
    h = (x1 - x0) / (N + 1)
    xd = list()
    yd = list()
    xd.append(x0)
    # to allow vector operations in R^2, we use the numpy library
    yd.append(np.array([y0, y1]))
    for i in range(1, N + 1):
        # We use the explicit Euler scheme y_{i+1} = y_i + h * f(x_i, y_i);
        # remember that now yd is a list of vectors.
        # The equivalent order-1 equation is [y, y']' = [y', f(x, y, y')]
        y = yd[-1] + h * np.array([yd[-1][1], f(xd[-1], yd[-1][0], yd[-1][1])])  # vector of dimension 2
        # you can replace the above scheme by any other (R-K 4 for example!)
        x = xd[-1] + h
        yd.append(y)
        xd.append(x)
    return xd, yd

x0 = 0
x1 = 30
y0 = 1
y1 = 0
# the only function satisfying y(0) = 1, y'(0) = 0 and y'' = -y is y(x) = cos(x)
N = 5000
xd, yd = explicit_euler(x0, x1, y0, y1, N)
# keep only the first component of yd (the solution y itself)
yd_1 = list(map(lambda y: y[0], yd))
plt.plot(xd, yd_1)
plt.show()
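Since the exact solution is cos(x), the global error can be checked directly (a quick sketch reusing np, xd and yd_1 from above; expect a modest drift, because explicit Euler slowly gains amplitude on oscillatory problems):

err = max(abs(yd_1[i] - np.cos(xd[i])) for i in range(len(xd)))
print("max error vs cos(x):", err)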
Consider a simple problem:
min log(x)
subject to x >= 1e-4
To solve the problem with scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize
from math import log

def func(x):
    return log(x[0])

def func_deriv(x):
    return np.array([1 / x[0]])

cons = ({'type': 'ineq',
         'fun': lambda x: x[0] - 1e-4,
         'jac': lambda x: np.array([1.0])})
minimize(func, [1.0], jac=func_deriv, constraints=cons, method='SLSQP')
The script raises a ValueError because log(x) is evaluated at a negative x. It seems that the function value is evaluated even when the constraint is not satisfied.
I understand that using bounds in minimize() could avoid the problem, but this is just a simplification of my original problem. In my original problem the constraint x >= 1e-4 cannot easily be represented as bounds on x; it is rather of the form g(x) >= C, so bounds wouldn't help.
If we only care about the function value with x > ε, it is possible to define a safe function extending the domain.
Take the log function as an example. It is possible to extend log below ε with a cubic function, while keeping the bridge point ε smooth:
safe_log(x) = log(x) if x > ε else a * (x - b)**3
To calculate a and b, we match the value and the first derivative of log at the bridge point ε:
log(ε) = a * (ε - b)**3
1 / ε = 3 * a * (ε - b)**2
Solving these two equations gives b = ε * (1 - 3*log(ε)) and a = 1 / (27 * ε**3 * log(ε)**2), which is exactly what the code below computes.
Hence the safe_log function:
eps = 1e-3

def safe_log(x):
    if x > eps:
        return log(x)
    # cubic extension below eps, smooth at the bridge point
    logeps = log(eps)
    a = 1 / (3 * eps * (3 * logeps * eps)**2)
    b = eps * (1 - 3 * logeps)
    return a * (x - b)**3
Plotted, safe_log coincides with log for x > ε and continues smoothly as a cubic below ε, so the optimizer can evaluate it anywhere.
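A quick numerical check of that smoothness (a sketch assuming eps and safe_log as defined above; value and slope should agree across the bridge point):

h = 1e-9
print(safe_log(eps), log(eps))                                   # same value at eps
print((safe_log(eps + h) - safe_log(eps - h)) / (2*h), 1/eps)    # same slope at eps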
I am trying to find the root of a cubic equation using fsolve. This is my code:
from scipy import *
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np

# These are all parameters
g = 5.61
gamma = 6.45
kappa = 6.45
J = 6.45
rs = 10.0
m = 5.0*10**(-11)
wm = 2*3.14*23.4

r2 = np.linspace(0, 0.02, 1000)
deltaW = 0

A = 1j*g**2*(kappa + 1j*deltaW)*r2*r2/(m*wm**2)
B = J**2 + (1j*deltaW - gamma)*(1j*deltaW + kappa)
C = A + B
D = abs(C)*r2 - J*np.sqrt(2*kappa)*rs

def func(x):
    D = abs(C)*r2 - J*np.sqrt(2*kappa)*rs
    return D

x0 = fsolve(func, 0.0)
print(x0)

plt.plot(r2, D)
plt.show()
I can see from the plot that there is at least one r2 that makes D zero. However, the return value x0 I get from fsolve is always the guess value I set.
Can anyone tell me why this is happening and how to fix it?
You are passing fsolve a function that isn't really a function at all: it doesn't do anything with its input x. Yet fsolve needs that, because it tests a series of values and each time checks the return value of the function called with that test value. In your case func(x) never changes, so fsolve stops with the error message
The iteration is not making good progress, as measured by the improvement from the last ten iterations.
You would see that message if you added full_output=True to the fsolve call.
To solve it, define your function like this:
def func(x):
    A = 1j*g**2*(kappa + 1j*deltaW)*x*x/(m*wm**2)
    B = J**2 + (1j*deltaW - gamma)*(1j*deltaW + kappa)
    C = A + B
    D = abs(C)*x - J*np.sqrt(2*kappa)*rs
    return D
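With that, fsolve actually has something to iterate on. A usage sketch (the nonzero initial guess inside the plotted r2 range is my assumption; the result should land near the zero crossing visible in the plot):

x0 = fsolve(func, 0.01)
print(x0)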