Is fsolve good for any system of equations? - python

I don't have a lot of experience with Python but I decided to give it a try in solving the following system of equations:
x = A * exp(x + y)
y = 4 * exp(x + y)
I want to solve this system and plot x and y as a function of A.
I saw a similar question and gave fsolve a try:
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np

def f(p):
    x, y = p
    A = np.linspace(0, 4)
    eq1 = x - A * np.exp(x + y)
    eq2 = y - 4 * np.exp(x + y)
    return (eq1, eq2)

x, y = fsolve(f, (0, 0))
print(x, y)
plt.plot(x, A)
plt.plot(y, A)
I'm getting these errors:
setting an array element with a sequence.
Result from function call is not a proper array of floats.

Pass the value of A as an argument to the function and run fsolve for each value of A separately.
The following code works:
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np

def f(p, *args):
    x, y = p
    A = args[0]
    return (x - A * np.exp(x + y), y - 4 * np.exp(x + y))

A = np.linspace(0, 4, 5)
X = []
Y = []
for a in A:
    x, y = fsolve(f, (0.0, 0.0), args=(a,))
    X.append(x)
    Y.append(y)
    print(x, y)

plt.plot(A, X)
plt.plot(A, Y)
4.458297786441408e-17 -1.3860676807976662
-1.100088440495758 -0.5021704548996653
-1.0668987418054918 -0.7236105952221454
-1.0405000943788385 -0.9052366768954621
-1.0393471472966025 -1.0393471472966027
/usr/local/lib/python3.6/dist-packages/scipy/optimize/minpack.py:163: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
/usr/local/lib/python3.6/dist-packages/scipy/optimize/minpack.py:163: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
warnings.warn(msg, RuntimeWarning)
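If fsolve struggles for some values of A (as the warnings above suggest), one common tweak, shown here as a minimal sketch of my own rather than part of the answer, is to reuse the previous solution as the initial guess for the next value of A:

from scipy.optimize import fsolve
import numpy as np

def f(p, A):
    x, y = p
    return (x - A * np.exp(x + y), y - 4 * np.exp(x + y))

A_values = np.linspace(0, 4, 5)
guess = (0.0, 0.0)
solutions = []
for a in A_values:
    guess = fsolve(f, guess, args=(a,))  # warm start from the previous root
    solutions.append(guess)

This often helps convergence when neighbouring values of A have nearby roots; it cannot help where the system has no real solution for a given A.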

Related

Solving equation containing integrals with python

I'm currently trying to solve the following equation for x:
3.17e-2 - integral from x to 215 of [10^(8.64/x) / (480.1 - 10^(4.32/x))^2] dx = 0.
(sorry for writing the equation in such a crude way, I wasn't sure on how to insert latex on here)
so far I've come up with this:
import scipy as s
from scipy.integrate import odeint, quad
import numpy as np

def f(x):
    fpe = 40
    k = 1.26e4*fpe**2/4.2e4
    return 10.**(8.64/x) / (k - 10.**(4.32/x))**2

def intf(x):
    for i in x:
        if 3.17e-2 - quad(lambda i: f(i), i, 215) == 0.:
            print(i)

xi = np.linspace(0.01, 5, 1000)
intf(xi)
However, I keep getting the following error:
OverflowError: (34, 'Result too large')
As you can imagine, this is not the result I was expecting. Do you reckon that this is only due to the result being too large or could there be something wrong with the code?
One thing you have to change: quad returns a tuple (y, abserr), so the value of the integral is quad(...)[0].
Also, if you compare f(x) == 0 you will only detect exact solutions, which is practically impossible for this function in floating point computation. You could use abs(f(x)) < ytol, or simply use a root-finding method; I would suggest fsolve.
Another thing is that you have the derivative of the function, so you can pass that to fsolve as well. Putting it all together:
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def fprime(x):
    fpe = 40
    k = 1.26e4*fpe**2/4.2e4
    return 10.**(8.64/x) / (k - 10.**(4.32/x))**2

def f(x):
    try:
        return np.array([f(i) for i in x])
    except TypeError:
        return 3.17e-2 - quad(lambda i: fprime(i), x, 215)[0]

x0 = fsolve(f, 1, fprime=fprime)
This gives x0 = 2.03740802 and f(x0) = 2.35922393e-16.
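As a variation on the zero-finding suggestion above (my own sketch, not part of the answer), a bracketing root finder such as scipy.optimize.brentq also works, provided you supply an interval on which f changes sign (for example, found from a quick plot of f):

from scipy.optimize import brentq

# assumes f from the snippet above and that f(2.0) and f(3.0) have opposite signs
root = brentq(f, 2.0, 3.0)
print(root, f(root))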

Solving differential equations numerically

I tried solving a very simple equation, f = t**2, numerically. I coded a for-loop, so as to use f for the first time step and then use the solution of each pass as the initial function for the next pass.
I am not sure if my approach to solving it numerically is correct, and for some reason my loop only works twice (once through the if-, then the else-statement) and then just gives zeros.
Any help is very much appreciated. Thanks!!!
## IMPORT PACKAGES
import numpy as np
import math
import sympy as sym
import matplotlib.pyplot as plt

## Loop to solve numerically
for i in range(1, 4, 1):
    if i == 1:
        f_old = t**2
        print(f_old)
    else:
        f_old = sym.diff(f_old, t).evalf(subs={t: i})
        f_new = f_old + dt * (-0.5 * f_old)
        f_old = f_new
        print(f_old)
The scipy.integrate package has a function called odeint that is used for solving differential equations.
Here are some resources:
Link 1
Link 2
y = odeint(model, y0, t)
model: function name that returns derivative values at requested y and t values as dydt = model(y, t)
y0: initial conditions of the differential states
t: time points at which the solution should be reported. Additional internal points are often calculated to maintain accuracy of the solution but are not reported.
An example that plots the results as well:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k * y
    return dydt

# initial condition
y0 = 5

# time points
t = np.linspace(0, 20)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()
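Reading the loop in the question, the intended model looks like dy/dt = -0.5*y with the initial value taken from f = t**2 at the first time point; if that guess is right, it fits the same odeint pattern (this is my interpretation, not something stated in the answer, and the time range is an assumption):

import numpy as np
from scipy.integrate import odeint

def model(y, t):
    return -0.5 * y        # the -0.5 factor comes from the question's loop

t = np.linspace(1, 4, 50)  # time points (assumed range)
y0 = t[0]**2               # initial condition from f = t**2 at the first time point
y = odeint(model, y0, t)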

Python - Using a Kronecker Delta with ODEINT

I'm trying to plot the output from an ODE using a Kronecker delta function which should only become 'active' at a specific time t = t1.
This should give a sawtooth-like response where the initial value decays exponentially until t = t1, where it rises again instantly before decaying once again.
However, when I plot this it looks like the solver is seeing the Kronecker delta function as zero for all time t. Is there any way to do this in Python?
from scipy import KroneckerDelta
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    dy_dt = 500*KroneckerDelta(t, t1) - 2*y
    return dy_dt

t1 = 4
y0 = 500
t = np.arange(0, 10, 0.1)

y = sp.odeint(dy_dt, y0, t)
plt.plot(t, y)
In the case of a simple Kronecker delta using time, you can run the ode in pieces like so:
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    return -2*y

t_delta = 4
tend = 10
y0 = [500]

t1 = np.linspace(0, t_delta, 50)
y1 = odeint(dy_dt, y0, t1)

y0 = y1[-1] + 500  # execute Kronecker delta
t2 = np.linspace(t_delta, tend, 50)
y2 = odeint(dy_dt, y0, t2)

t = np.append(t1, t2)
y = np.append(y1, y2)
plt.plot(t, y)
Another option for complicated situations is to use the events functionality of solve_ivp.
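As a rough illustration of that suggestion (a minimal sketch of my own, assuming a scipy version that provides solve_ivp): stop the integration at t = t1 with a terminal event, apply the jump, then continue.

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def rhs(t, y):
    return -2 * y

t1, tend, y0 = 4.0, 10.0, [500.0]

hit_t1 = lambda t, y: t - t1      # event function crosses zero at t = t1
hit_t1.terminal = True            # stop the solver when the event fires

sol1 = solve_ivp(rhs, (0, tend), y0, events=hit_t1, max_step=0.1)
y_jump = sol1.y[:, -1] + 500      # apply the delta-like jump
sol2 = solve_ivp(rhs, (sol1.t[-1], tend), y_jump, max_step=0.1)

plt.plot(np.append(sol1.t, sol2.t), np.append(sol1.y[0], sol2.y[0]))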
I think the problem could be internal rounding errors, because 0.1 cannot be represented exactly as a Python float. I would try

import math

def dy_dt(y, t):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y
Also, the documentation of odeint suggests using the args parameter instead of global variables to give your derivative function access to additional arguments, and replacing np.arange with np.linspace:
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np
import math

def dy_dt(y, t, t1):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y

t1 = 4
y0 = 500
t = np.linspace(0, 10, num=101)

y = sp.odeint(dy_dt, y0, t, args=(t1,))
plt.plot(t, y)
I did not test the code so tell me if there is anything wrong with it.
EDIT:
When testing my code I took a look at the t values for which dy_dt is evaluated. I noticed that odeint does not only use the t values that were specified, but alters them slightly:
...
3.6636447422787928
3.743098503914526
3.822552265550259
3.902006027185992
3.991829287543431
4.08165254790087
4.171475808258308
...
Now using my method, we get
math.isclose(3.991829287543431, 4) # False
because the default tolerance is set to a relative error of at most 10^(-9), so the odeint function "misses" the bump of the derivative at 4. Luckily, we can fix that by specifying a higher error threshold:
def dy_dt(y, t, t1):
    if math.isclose(t, t1, abs_tol=0.01):
        return 500 - 2*y
    else:
        return -2*y
Now dy_dt is very high for all values between 3.99 and 4.01. It is possible to make this range smaller if the num argument of linspace is increased.
TL;DR
Your problem is not a Python problem but a problem of numerically solving a differential equation: you need to alter your derivative over an interval of sufficient length, otherwise the solver will likely miss the interesting spot. A Kronecker delta does not work with numeric approaches to solving ODEs.
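To make that concrete, here is a minimal sketch of my own (not part of the answer): replace the delta by a short rectangular pulse whose area equals the desired jump of 500, and keep the solver's step size small enough that it cannot step over the pulse.

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dy_dt(y, t, t1, width=0.05):
    pulse = 500 / width if t1 <= t <= t1 + width else 0.0  # pulse area ~ 500
    return pulse - 2 * y

t = np.linspace(0, 10, 2001)
y = odeint(dy_dt, 500, t, args=(4.0,), hmax=0.01)  # hmax keeps steps inside the pulse
plt.plot(t, y)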

Marginal density function with respect to X and Y in python

I have a joint density function in two variables x and y, f(x, y) = y*e**(-y(x+1)), and I need to calculate the marginal density functions in X and Y using quad in python.
from scipy.integrate import dblquad
import numpy as np
import math

def f(x, y):
    return y*math.exp(-y(x+1))  # Joint Density Function

ans, err = dblquad(f, 0, math.inf, lambda x: 0, lambda x: math.inf)
ans
I am trying the above code in a Jupyter notebook, but for a marginal density function we only need limits for the integral over x or y, and the above code is throwing an error.
Maybe this will help you out:

from sympy.abc import x, y
from sympy import integrate, exp, oo

fxy = y*exp(-y*x - y)             # joint density y*exp(-y*(x+1))
fy = integrate(fxy, (x, 0, oo))   # integrate out x to get the marginal in y
fx = integrate(fxy, (y, 0, oo))   # integrate out y to get the marginal in x
fy
fx
There is a typo in your joint density function f: you missed one * for the product of -y and (x+1) inside math.exp. Fixing that typo, your program should work.

def f(x, y):
    return y*math.exp(-y*(x+1))
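Since the question is about marginal densities with quad, here is a minimal numerical sketch of my own (not from either answer), assuming the corrected joint density; marginal_x is a name introduced here purely for illustration:

import math
import numpy as np
from scipy.integrate import quad

def f(x, y):
    return y * math.exp(-y * (x + 1))   # corrected joint density

def marginal_x(x):
    # f_X(x): integrate the joint density over y
    val, _ = quad(lambda y: f(x, y), 0, np.inf)
    return val

print(marginal_x(1.0))   # analytically 1/(x+1)**2, i.e. 0.25 here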

Result from function call is not a proper array of floats using scipy.fsolve

I am trying to solve these simple simultaneous equations using scipy's fsolve function:
x + 2 = 10 &
x^2 = 64.
I am expecting 8 as the solution. However, I'm getting an error saying "minpack.error: Result from function call is not a proper array of floats."
I am pretty new to Python's scientific libraries. Can someone please explain how to solve this error? Thanks!
from scipy.optimize import fsolve

def equations(p):
    x = p
    return (x-8, x**2 - 64)

x = fsolve(equations, 1)
print(x)
When you look at how fsolve is defined in the scipy module we see:
def fsolve(func, x0, args=(), fprime=None, full_output=0,
           col_deriv=0, xtol=1.49012e-8, maxfev=0, band=None,
           epsfcn=None, factor=100, diag=None):
    """
    Find the roots of a function.

    Return the roots of the (non-linear) equations defined by
    ``func(x) = 0`` given a starting estimate.

    Parameters
    ----------
    func : callable ``f(x, *args)``
        A function that takes at least one (possibly vector) argument,
        and returns a value of the same length.
    """
So your input value for p should consist of just as many elements as are returned by your function. Try for example:
from scipy.optimize import fsolve
import numpy as np

def equations(p):
    x1 = p[0]
    x2 = p[1]
    return x1-8, x2**2 - 64

x = fsolve(equations, np.array([1, 2]))
print(x)
which gives 8, 8 as an answer.
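If you really do have two equations in a single unknown, another option (my own sketch, not part of the answer above) is scipy.optimize.least_squares, which accepts a residual vector longer than x and minimizes the sum of squared residuals:

import numpy as np
from scipy.optimize import least_squares

def residuals(p):
    x = p[0]
    return [x + 2 - 10, x**2 - 64]   # both residuals vanish at x = 8

sol = least_squares(residuals, x0=[1.0])
print(sol.x)   # approximately [8.]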
