How to resolve integration function not integrating correctly? - python

I am trying to build a few simple operations such as a derivative and integral function to operate on lambda functions because sympy and scipy were struggling to integrate some things that I was passing to them.
The derivative function gives me no issues and appears to return the derivative of the input function when plotted, but the integral function does not behave the same way and does not plot the correct integral of the input.
import matplotlib.pyplot as plt
import numpy as np
from phys_func import func
sr = [-10,10]
x = np.linspace(sr[0],sr[1], 100)
F = lambda x: x**2
f = func.I(F,x)
plt.plot(x,F(x), label = 'F(x)')
plt.plot(x,f(x), label = 'f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
The integration function that does not work:
def I(F,x):
    dx = (x[len(x)-1] - x[0])/len(x)
    return lambda x: 0.5*( F(x+dx) + F(x) )*dx
The derivative function that works:
def d(f,x):
    dx = (x[len(x)-1] - x[0])/len(x)
    return lambda x: (f(x+dx) - f(x))/dx
Can anyone lend me a hand please?

You cannot find the antiderivative of a function numerically without knowing the value of the antiderivative at a single point. Suppose you fix the value of the antiderivative at x = a to be 0 (and the given function is continuous on [a, x]); then you can use definite integrals. For this function, let us take a = 0 (i.e. 0 is a root of the antiderivative), so you can do a definite integral over [0, x]. Also, your integration function is wrong: you need to sum all of the 0.5*( F(x+dx) + F(x) )*dx elements from 0 to x to get the definite integral, whereas your lambda only returns the area of a single trapezoid.
You can modify your I function as follows:
def I(F1): # N is the number of intervals
    return lambda x, N: np.sum(0.5*(F1(np.linspace(0, x, num=N) + x/N) + F1(np.linspace(0, x, num=N)))*x/N)
In [1]: import numpy as np
In [2]: import matplotlib.pyplot as plt
In [3]: def I(F1): # N is the number of intervals
   ...:     return lambda x, N: np.sum(0.5*(F1(np.linspace(0, x, num=N) + x/N) + F1(np.linspace(0, x, num=N)))*x/N)
In [4]: F = lambda x: x**2
In [5]: x_ran = np.linspace(-10, 10, 100)
In [6]: y = I(F)
In [7]: y_ran = []
In [8]: for i in x_ran:
   ...:     y_ran.append(y(i, 100))
In [9]: plt.plot(x_ran, y_ran)
In [10]: plt.show()
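If you want the whole antiderivative curve on the original grid rather than one value at a time, the same trapezoid areas can be accumulated with np.cumsum in a single vectorized call. This is only a sketch of the idea above; the helper name I_grid and the choice to anchor the antiderivative at the left endpoint are mine, not part of the original code.

import numpy as np
import matplotlib.pyplot as plt

def I_grid(F, x):
    # trapezoid area of each interval, then a running sum;
    # the antiderivative is anchored to 0 at the left endpoint x[0]
    y = F(x)
    dx = np.diff(x)
    areas = 0.5*(y[1:] + y[:-1])*dx
    return np.concatenate(([0.0], np.cumsum(areas)))

x = np.linspace(-10, 10, 100)
F = lambda x: x**2
plt.plot(x, F(x), label='F(x)')
plt.plot(x, I_grid(F, x), label='integral of F (anchored at x = -10)')
plt.legend()
plt.show()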


Solution of a differential equation at a specific point - python

I have a differential equation:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k * y
    return dydt

# initial condition
y0 = 5

# time points
t = np.linspace(0, 10)
t1 = 2

# solve ODE
y = odeint(model, y0, t)
And I want to evaluate the solution of this differential equation at two different points, for example y(t=2) and y(t=3).
I can solve the problem in the following way:
Suppose that you need y(2). Then you define
t = np.linspace(0,2)
and just print
print(y[-1])
in order to get the value of y(2). However, I think this procedure is slow, since I have to repeat it to calculate y(3), and again for every other point I want. Is there a faster way to do this?
isn't this just:
y = odeint(model, y0, [0, 2, 3])[1:]
i.e. the third parameter just specifies the values of t that you want back.
as an example of printing the results out, we'd just follow the above with:
print(f'y(2) = {y[0,0]}')
print(f'y(3) = {y[1,0]}')
which gives me:
y(2) = 2.7440582441900494
y(3) = 2.032848408317066
which matches the analytical solution:
5 * np.exp(-0.3 * np.array([2,3]))
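Putting that together, a minimal self-contained sketch of this approach (model and constants taken from the question) might look like:

from scipy.integrate import odeint

# function that returns dy/dt (same model as in the question)
def model(y, t):
    k = 0.3
    return -k * y

y0 = 5
t_points = [0, 2, 3]                  # the first entry must be the initial time
y = odeint(model, y0, t_points)[1:]   # drop the row for the initial condition

print(f'y(2) = {y[0, 0]}')
print(f'y(3) = {y[1, 0]}')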
You can get exactly what you want if you use solve_ivp with the dense-output option
from scipy.integrate import solve_ivp

# function that returns dy/dt
def model(t, y):
    k = 0.3
    dydt = -k * y
    return dydt

# initial condition
y0 = [5]

# solve ODE
res = solve_ivp(model, [0, 10], y0, dense_output=True)
y = lambda t: res.sol(t)[0]

for t in [2, 3, 3.4]:
    print(f'y({t}) = {y(t)}')
with the output
y(2) = 2.743316182689662
y(3) = 2.0315223673200338
y(3.4) = 1.802238620366918
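If the evaluation points are known up front, solve_ivp's t_eval argument reports the solution at exactly those times without going through the dense-output interpolant; a small sketch under the same model:

from scipy.integrate import solve_ivp

def model(t, y):
    k = 0.3
    return -k * y

# ask the solver to report the solution only at the requested times
res = solve_ivp(model, [0, 10], [5], t_eval=[2, 3, 3.4])
for t, y_t in zip(res.t, res.y[0]):
    print(f'y({t}) = {y_t}')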

Having trouble using the sympy subs command when evaluating a function at x = 0

I have a function [ -4*x/sqrt(1 - (1 - 2*x^2)^2) + 2/sqrt(1 - x^2) ] that I need to evaluate at x=0. However, whenever you graph this function, for some interval of y there are many y-values at x=0. This leads me to think that the (subs) command can only return one y-value. Any help or elaboration on this? Thank you!
Here's my code if it might help:
from sympy import symbols, asin, acos, diff

x = symbols('x')
f = 2*asin(x)            # f(x) function
g = acos(1 - 2*x**2)     # g(x) function
eq = diff(f - g)         # evaluating the derivative of f(x) - g(x)
eq.subs(x, 0)            # substituting 0 for x in the derivative of f(x) - g(x)
After I run the code, it returns NaN, which I assume is because substituting in 0 for x returns not a single number, but a range of numbers.
Here is the graph of the function to be evaluated at x=0:
You should always give SymPy as many assumptions as possible. For example, it can't pull an x**2 out of a sqrt because it thinks x is complex.
A factorization and then a simplification solves the problem. SymPy can't do L'Hopital on eq = A + B since it does not know that both A and B converge. So you have to guide it a little by bringing the fractions together and then simplifying:
from sympy import *
x = symbols('x', real=True)
f = 2*asin(x) # f(x) function
g = acos(1-2*x**2) # g(x) function
eq = diff(f-g) # evaluating the derivative of f(x) - g(x)
eq = simplify(factor(eq))
print(eq)
print(limit(eq, x, 0, "+"))
print(limit(eq, x, 0, "-"))
Outputs:
(-2*x + 2*Abs(x))/(sqrt(1 - x**2)*Abs(x))
0
4
simplify, factor and expand do wonders.
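A quick illustration of why the real=True assumption matters (a toy example, independent of the question's function):

from sympy import sqrt, simplify, symbols

x_c = symbols('x')               # complex by default
x_r = symbols('x', real=True)    # declared real

print(simplify(sqrt(x_c**2)))    # stays sqrt(x**2): x could be any complex number
print(simplify(sqrt(x_r**2)))    # simplifies to Abs(x)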

Marginal density function with respect to X and Y in python

I have a joint density function in two variables x and y, and I need to calculate the marginal density functions in X and Y using quad in Python, for the function f(x, y) = y*e**(-y*(x+1)).
from scipy.integrate import dblquad
import numpy as np
import math
def f(x,y):
    return y*math.exp(-y(x+1)) # Joint Density Function

ans, err = dblquad(f, 0, math.inf, lambda x: 0, lambda x: math.inf)
ans
I am trying the above code in a Jupyter notebook, but for a marginal density function we only need limits for the integral over one of x and y, and the above code is throwing an error.
Maybe this will help you out
from sympy import integrate, exp, oo
from sympy.abc import x, y

fxy = y*exp(-y*x - y)            # joint density y*e**(-y*(x+1))
fy = integrate(fxy, (x, 0, oo))  # integrate out x
fx = integrate(fxy, (y, 0, oo))  # integrate out y
fy
fx
There is a typo in your joint density function f: you are missing a * for the product of -y and (x+1) inside math.exp. Fixing that typo, your program should work.
def f(x, y):
    return y*math.exp(-y*(x+1))
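Since the question asked for quad specifically, here is a numeric sketch of the two marginals using the fixed integrand; the evaluation points 1.0 and 2.0 are only illustrative choices of mine:

import math
from scipy.integrate import quad

def f(x, y):
    return y*math.exp(-y*(x+1))   # joint density, with the typo fixed

def f_X(x):
    # marginal density in X: integrate the joint density over y
    return quad(lambda y: f(x, y), 0, math.inf)[0]

def f_Y(y):
    # marginal density in Y: integrate the joint density over x
    return quad(lambda x: f(x, y), 0, math.inf)[0]

print(f_X(1.0))   # analytically 1/(1+1)**2 = 0.25
print(f_Y(2.0))   # analytically exp(-2) ≈ 0.1353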

Why does sympy.diff not differentiate sympy polynomials as expected?

I am trying to figure out why sympy.diff does not differentiate sympy polynomials as expected. Normally, sympy.diff works just fine if a symbolic variable is defined and the polynomial is NOT defined using sympy.Poly. However, if the function is defined using sympy.Poly, sympy.diff does not seem to actually compute the derivative. Below is a code sample that shows what I mean:
import sympy as sy
# define symbolic variables
x = sy.Symbol('x')
y = sy.Symbol('y')
# define function WITHOUT using sy.Poly
f1 = x + 1
# define function WITH using sy.Poly
f2 = sy.Poly(x + 1, x, domain='QQ')
# compute derivatives and return results
df1 = sy.diff(f1,x)
df2 = sy.diff(f2,x)
print('f1: ',f1)
print('f2: ',f2)
print('df1: ',df1)
print('df2: ',df2)
This prints the following results:
f1: x + 1
f2: Poly(x + 1, x, domain='QQ')
df1: 1
df2: Derivative(Poly(x + 1, x, domain='QQ'), x)
Why does sympy.diff not know how to differentiate the sympy.Poly version of the polynomial? Is there a way to differentiate the sympy polynomial, or a way to convert the sympy polynomial to the form that allows it to be differentiated?
Note: I tried with different domains (i.e., domain='RR' instead of domain='QQ'), and the output does not change.
This appears to be a bug. You can get around it by calling diff directly on the Poly instance. Ideally calling the function diff from the top level sympy module should yield the same result as calling the method diff.
In [1]: from sympy import *
In [2]: from sympy.abc import x
In [3]: p = Poly(x+1, x, domain='QQ')
In [4]: p.diff(x)
Out[4]: Poly(1, x, domain='QQ')
In [5]: diff(p, x)
Out[5]: Derivative(Poly(x + 1, x, domain='QQ'), x)
In [6]: diff(p, x).doit()
Out[6]: Derivative(Poly(x + 1, x, domain='ZZ'), x)
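For the second part of the question, you can also convert the Poly back to an ordinary expression with as_expr() before differentiating; a short sketch:

from sympy import Poly, diff
from sympy.abc import x

p = Poly(x + 1, x, domain='QQ')
expr = p.as_expr()      # plain expression: x + 1
print(diff(expr, x))    # 1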

Need help to compute derivative and integral using Numpy

I need help computing the derivative and integral of a function using the finite difference method and NumPy, without using loops.
The whole task: tabulate the Gauss function f(x) = (1./(sqrt(2.*pi)*s))*e**(-0.5*((x-m)/s)**2) on the interval [-10, 10] for m = 0 and s = [0.5, 5]. Compute the derivative and integral of the function using the finite difference method, without loops. Create plots of the function and its derivative. Use NumPy and Matplotlib.
Here's the beginning of the program:
import numpy as np
from math import sqrt, pi, e

def f(x, s, m):
    return (1./(sqrt(2.*pi)*s))*e**(-0.5*((x-m)/s)**2)

def main():
    m = 0
    s = np.linspace(0.5, 5, 3)
    x = np.linspace(-10, 10, 20)
    for i in range(3):
        print('s = ', s[i])
        for j in range(20):
            f(x[j], s[i], m)
            print('x = ', x[j], ', y = ', f(x[j], s[i], m))
By using the numpy arrays you can apply that operation directly with algebraic notation:
result = (1./(np.sqrt(2.*np.pi)*s))*np.exp(-0.5*((x-m)/s)**2)
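For instance, the double loop in the question's main() can be replaced by one broadcasted evaluation over the whole grid; this is just a sketch, and the extra axis on x is my own choice for pairing every x with every s:

import numpy as np

m = 0
s = np.linspace(0.5, 5, 3)
x = np.linspace(-10, 10, 20)

# shape (20, 3): one column of y-values per value of s
y = (1./(np.sqrt(2.*np.pi)*s)) * np.exp(-0.5*((x[:, np.newaxis] - m)/s)**2)
print(y.shape)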
The simplest way (without using SciPy) seems to me to be a direct sum for the integral and the central difference method for the derivative:
import numpy as np
import pylab

def gaussian(x, s, m):
    return 1./(np.sqrt(2.*np.pi)*s) * np.exp(-0.5*((x-m)/s)**2)

m = 0
s = np.linspace(0.5, 5, 3)
x, dx = np.linspace(-10, 10, 1000, retstep=True)
x = x[:, np.newaxis]
y = gaussian(x, s, m)

h = 1.e-6
dydx = (gaussian(x+h, s, m) - gaussian(x-h, s, m))/2/h
int_y = np.sum(gaussian(x, s, m), axis=0) * dx

print(int_y)
pylab.plot(x, y)
pylab.plot(x, dydx)
pylab.show()
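If the task also wants the running integral (the antiderivative curve) rather than only the total area, np.cumsum keeps it loop-free; a small sketch with a single illustrative value of s:

import numpy as np
import pylab

def gaussian(x, s, m):
    return 1./(np.sqrt(2.*np.pi)*s) * np.exp(-0.5*((x-m)/s)**2)

m, s = 0, 1.0   # one illustrative width
x, dx = np.linspace(-10, 10, 1000, retstep=True)
y = gaussian(x, s, m)

# cumulative (running) integral: approaches 1, since the density integrates to 1
cum_int_y = np.cumsum(y)*dx
pylab.plot(x, y, label='f(x)')
pylab.plot(x, cum_int_y, label='cumulative integral')
pylab.legend()
pylab.show()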
