My task is to do, first, a normal integration and, second, a trapezoid integration of f(x) = x^2 with Python:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-10,10)
y = x**2
l = plt.plot(x, y)
plt.show()
Now I want to integrate this function to get the antiderivative F(x) = (1/3)x^3; a plot of F(x) should be the output in the end.
Could someone explain to me how to get the antiderivative F(x) of f(x) = x^2 with Python?
I want to do this with a normal integration and a trapezoidal integration, the latter from -10 to 10 with a step size of 0.01 (the width of the trapezoids). In the end I want to get the function F(x) = (1/3)x^3 in both cases. How can I achieve this?
Thanks for helping me.
There are two key observations:
the trapezoidal rule refers to numeric integration, whose output is not an integral function but a single number (see the small example after these two points)
integration is defined only up to an arbitrary constant, which is not included in your definition of F(x)
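For example, a single call to trapz() only returns the total area under the curve, not a function (a small sketch using the grid from the question):
import numpy as np
from scipy.integrate import trapz

x = np.arange(-10, 10, 0.01)
y = x ** 2
print(trapz(y, x))  # a single number (about 665.7 here), not an array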
With this in mind, you can use scipy.integrate.trapz() to define an integral function:
import numpy as np
from scipy.integrate import trapz
def numeric_integral(x, f, c=0):
    # cumulative trapezoid integral of f on x, shifted by the constant c
    return np.array([trapz(f(x[:i]), x[:i]) for i in range(len(x))]) + c
or, more efficiently, using scipy.integrate.cumtrapz() (which does the computation from above):
import numpy as np
from scipy.integrate import cumtrapz
def numeric_integral(x, f, c=0):
    return cumtrapz(f(x), x, initial=c)
This plots as below:
import matplotlib.pyplot as plt
def func(x):
    return x ** 2
x = np.arange(-10, 10, 0.01)
y = func(x)
Y = numeric_integral(x, func)
plt.plot(x, y, label='f(x) = x²')
plt.plot(x, Y, label='F(x) = x³/3 + c')
plt.plot(x, x ** 3 / 3, label='F(x) = x³/3')
plt.legend()
which gives you the desired result except for the arbitrary constant, which you should specify yourself.
For good measure, while not relevant in this case, note that np.arange() does not provide stable results if used with a fractional step. Typically, one would use np.linspace() instead.
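For instance, a grid equivalent to np.arange(-10, 10, 0.01) could be built like this (note that, unlike np.arange(), np.linspace() includes the endpoint by default):
import numpy as np

# 2001 evenly spaced points from -10 to 10 inclusive give a spacing of 0.01
x = np.linspace(-10, 10, 2001)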
The cumtrapz function from scipy will provide an antiderivative using trapezoid integration (x and y being the arrays from the question):
import numpy as np
from scipy.integrate import cumtrapz

yy = cumtrapz(y, x, initial=0)

# make yy == 0 around x == 0 (an optional choice of the integration constant)
i_x0 = np.where(x >= 0)[0][0]
yy -= yy[i_x0]
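A self-contained sketch of how this could look, compared against the analytic antiderivative x³/3 (the grid and the reference curve are added here for illustration):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import cumtrapz

x = np.arange(-10, 10, 0.01)
y = x ** 2

yy = cumtrapz(y, x, initial=0)
yy -= yy[np.where(x >= 0)[0][0]]  # choose the constant so that yy is about 0 at x = 0

plt.plot(x, yy, label='cumtrapz')
plt.plot(x, x**3 / 3, '--', label='x³/3')
plt.legend()
plt.show()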
Trapezoid integration
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-10, 10, 0.1)
f = x**2
F = [-333.35]
for i in range(1, len(x) - 1):
    F.append((f[i] + f[i - 1])*(x[i] - x[i - 1])/2 + F[i - 1])
F = np.array(F)
fig, ax = plt.subplots()
ax.plot(x, f)
ax.plot(x[1:], F)
plt.show()
Here I have applied the theoretical formula (f[i] + f[i - 1])*(x[i] - x[i - 1])/2 + F[i - 1], while the integration is done in the block:
F = [-333.35]
for i in range(1, len(x) - 1):
    F.append((f[i] + f[i - 1])*(x[i] - x[i - 1])/2 + F[i - 1])
F = np.array(F)
Note that, in order to plot x and F, they must have the same number of elements, so I ignore the first element of x and both end up with 199 elements. This is a consequence of the trapezoid method: if you integrate an array f of n elements, you obtain an array F of n-1 elements. Moreover, I set the initial value of F to -333.35 at x = -10; this is the arbitrary constant from the integration process, and I chose that value so that the function passes near the origin.
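As a side note, the same trapezoid sums can be computed without an explicit Python loop using np.cumsum; in this variant F has the same length as x, with the arbitrary constant placed at x = -10:
import numpy as np

x = np.arange(-10, 10, 0.1)
f = x**2

# area of each trapezoid between consecutive grid points
areas = (f[1:] + f[:-1]) * np.diff(x) / 2

# running sum of the areas plus the chosen integration constant
F = -333.35 + np.concatenate(([0.0], np.cumsum(areas)))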
Analytical integration
import sympy as sy
import numpy as np
import matplotlib.pyplot as plt
x = sy.symbols('x')
f = x**2
F = sy.integrate(f, x)
xv = np.arange(-10, 10, 0.1)
fv = sy.lambdify(x, f)(xv)
Fv = sy.lambdify(x, F)(xv)
fig, ax = plt.subplots()
ax.plot(xv, fv)
ax.plot(xv, Fv)
plt.show()
Here I use the symbolic math through sympy module. The integration is done in the block:
F = sy.integrate(f, x)
Note that, in this case, the arrays Fv and xv already have the same number of elements. Moreover, the code is simpler.
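For completeness, printing F shows the symbolic antiderivative, and since sympy omits the integration constant it can be added by hand if needed:
import sympy as sy

x = sy.symbols('x')
F = sy.integrate(x**2, x)
print(F)      # x**3/3

c = sy.symbols('c')
print(F + c)  # c + x**3/3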
Related
I am using interp1d from Scipy to interpolate a function with linear interpolation. Now I need to upgrade to Whittaker–Shannon interpolation. Is this already implemented somewhere? I am surprised it is not among the options of interp1d as it is a very common interpolation algorithm.
I am not familiar with sinc interpolation, but based on "What's wrong with this Whittaker-Shannon-Kotel’nikov interpolation implementation?" I roughly followed the same pattern.
The idea is to resample the original data at a lower rate than the original (controlled by freq_s_ratio), reconstruct the signal using sinc functions, and finally resample back to the original size.
This caused boundary artifacts, but padding and then truncating the signal seems to work. Here is my code:
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
def rough_sinc_interp(samples, freq_s_ratio = 0.5):
    # pad with the edge values to reduce boundary artifacts
    offset_amount = int(len(samples)/2)
    padded_samples = np.concatenate([ offset_amount*[samples[0]], samples, offset_amount*[samples[-1]]])
    # resample at a lower rate, controlled by freq_s_ratio
    f_s = int(freq_s_ratio * len(padded_samples))
    resamples = scipy.signal.resample(padded_samples, f_s)
    # Whittaker-Shannon reconstruction from the coarse samples
    T_s = 1/f_s
    t = np.arange(0, 1, T_s)
    y = np.zeros(len(t))
    for k in range(1, len(resamples)):
        y = y + resamples[k] * np.sinc((t - k*T_s)/T_s)
    # resample back to the original grid and strip the padding
    return scipy.signal.resample(y, len(padded_samples))[offset_amount:-offset_amount]
np.random.seed(1337)
signal_fn = lambda x: -1*(np.sin(x) + np.cos(x**2) + np.random.normal(scale=0.5, size=len(x)) + np.log(np.abs(x**2) + 0.1)) + 50
x = np.arange(0, 10, 0.05)
y = signal_fn(x)
plt.figure(figsize=(15, 7))
plt.plot(x, y, label="noisy")
plt.plot(x, rough_sinc_interp(y, freq_s_ratio=0.5), label="smooth - 50%")
plt.plot(x, rough_sinc_interp(y, freq_s_ratio=0.15), label="smooth - 15%")
plt.plot(x, rough_sinc_interp(y, freq_s_ratio=0.1), label="smooth - 10%")
plt.legend(loc="best")
plt.show()
I want to ask something that is probably extremely easy, but I couldn't find how to do it... The point is that I want to define a function symbolically in Python using sympy, take its derivative, and then use that expression numerically.
Here is an example:
import numpy as np
from sympy import *
z = Symbol('z')
function = z*exp(z**2)
deriv = diff(function, z)
x = np.arange(1, 3, 0.1) #interval of points
#How can I evaluate numerically this array "x" with the function deriv???
Do you know how to do it? Thanks!
You can use lambdify with the numpy backend:
import numpy as np
from sympy import *
z = Symbol('z')
function = z*exp(z**2)
deriv = diff(function, z)
x = np.arange(1, 3, 0.1) #interval of points
d = lambdify(z, deriv, "numpy")
d(x)
# array([ 8.15484549e+00, 1.14689175e+01, 1.63762998e+01,
# 2.37373255e+01, 3.49286892e+01, 5.21825471e+01,
# 7.91672020e+01, 1.21994639e+02, 1.90992239e+02,
# 3.03860954e+02, 4.91383350e+02, 8.07886132e+02,
# 1.35069268e+03, 2.29681687e+03, 3.97320108e+03,
# 6.99317313e+03, 1.25255647e+04, 2.28335915e+04,
# 4.23706166e+04, 8.00431723e+04])
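As a quick sanity check, the lambdified derivative can be compared against the same expression written by hand with numpy (here the derivative is (2*z**2 + 1)*exp(z**2)):
import numpy as np
from sympy import Symbol, exp, diff, lambdify

z = Symbol('z')
deriv = diff(z * exp(z**2), z)
d = lambdify(z, deriv, "numpy")

x = np.arange(1, 3, 0.1)
manual = (2 * x**2 + 1) * np.exp(x**2)  # derivative written by hand
print(np.allclose(d(x), manual))        # True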
I have written this code to model the motion of a spring pendulum
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2=(0.415+x)*(dydt)**2-50/1.006*x+9.81*cos(y)
    dy2dt2=(-9.81*1.006*sin(y)-2*(dxdt)*(dydt))/(0.415+x)
    return np.array([x,y, dx2dt2, dy2dt2])
init = array([0,pi/18,0,0])
time = np.linspace(0.0,10.0,1000)
sol = odeint(deriv,init,time)
def plot(h,t):
    n,u,x,y=h
    n=(0.4+x)*sin(y)
    u=(0.4+x)*cos(y)
    return np.array([n,u,x,y])
init2 = array([0.069459271,0.393923101,0,pi/18])
time2 = np.linspace(0.0,10.0,1000)
sol2 = odeint(plot,init2,time2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(sol2[:,0], sol2[:, 1], label = 'hi')
plt.legend()
plt.show()
where x and y are two variables, and I'm trying to convert x and y to the polar coordinates n and u and then plot them, with n on the x-axis and u on the y-axis. However, when I run the code above, this is the plot I get:
Instead, I should be getting an image somewhat similar to this:
The first part of the code, from def deriv(z, t): to sol = odeint(deriv, ...), is where the values of x and y are generated; using those, I can then turn them into rectangular coordinates and graph them. How do I change my code to do this? I'm new to Python, so I might not understand some of the terminology. Thank you!
The first solution should give you the expected result, but there is a mistake in the implementation of the ODE.
The function you pass to odeint should return an array containing the derivatives (the right-hand sides) of a system of first-order differential equations.
In your case the state vector is z = [x, y, dxdt, dydt], but deriv returns [x, y, dx2dt2, dy2dt2], so what you are actually solving is
dx/dt = x, dy/dt = y, d(dxdt)/dt = dx2dt2, d(dydt)/dt = dy2dt2
while instead you should be solving
dx/dt = dxdt, dy/dt = dydt, d(dxdt)/dt = dx2dt2, d(dydt)/dt = dy2dt2
In order to do so, change your code to this:
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x) * (dydt)**2 - 50 / 1.006 * x + 9.81 * cos(y)
    dy2dt2 = (-9.81 * 1.006 * sin(y) - 2 * (dxdt) * (dydt)) / (0.415 + x)
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])
init = array([0, pi / 18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)
plt.plot(sol[:, 0], sol[:, 1], label='hi')
plt.show()
The second part of the code looks like you are trying to do a change of coordinates.
I'm not sure why you try to solve the ODE again instead of just doing this:
x = sol[:,0]
y = sol[:,1]
def plot(h):
    x, y = h
    n = (0.4 + x) * sin(y)
    u = (0.4 + x) * cos(y)
    return np.array([n, u])

n, u = plot((x, y))
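If the goal is the trajectory in the new coordinates, a short follow-up using the n and u arrays computed above would be something like:
import matplotlib.pyplot as plt

plt.xlabel("n")
plt.ylabel("u")
plt.plot(n, u)
plt.show()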
As of now, what you are doing in that second odeint call is solving the system
dn/dt = (0.4 + x) * sin(y), du/dt = (0.4 + x) * cos(y), dx/dt = x, dy/dt = y,
which (up to the initial values) leads to x = e^t and y = e^t, and hence n' = (0.4 + e^t) * sin(e^t) and u' = (0.4 + e^t) * cos(e^t).
Without going too much into the details, you can see intuitively that the derivatives of n and u switch sign faster and with greater magnitude at an exponential rate, which makes n and u collapse onto an attractor, as shown by your plot.
If you are actually trying to solve another differential equation, I would need to see it in order to help you further.
This is what happens if you do the transformation and set the time to 1000:
I need to compute the quantity
1/tanh(x) - 1/x
for x > 0, where x can be both very small and very large.
Asymptotically for small x, we have
1/tanh(x) - 1/x -> x / 3
and for large x
1/tanh(x) - 1/x -> 1
However, when computing the expression numerically, round-off errors already lead to it being evaluated as exactly 0 for x around 10^-7 and smaller:
import numpy
import matplotlib.pyplot as plt
x = numpy.array([2**k for k in range(-30, 30)])
y = 1.0 / numpy.tanh(x) - 1.0 / x
plt.loglog(x, y)
plt.show()
For very small x, one could use the Taylor expansion of 1/tanh(x) - 1/x around 0,
y = x/3.0 - x**3 / 45.0 + 2.0/945.0 * x**5
The error is of the order O(x**7), so if 10^-5 is chosen as the breaking point, relative and absolute error will be well below machine precision.
import numpy
import matplotlib.pyplot as plt
x = numpy.array([2**k for k in range(-50, 30)])
y0 = 1.0 / numpy.tanh(x) - 1.0 / x
y1 = x/3.0 - x**3 / 45.0 + 2.0/945.0 * x**5
y = numpy.where(x > 1.0e-5, y0, y1)
plt.loglog(x, y)
plt.show()
Use the Python package mpmath for arbitrary decimal precision. For example:
import mpmath
from mpmath import mpf
mpmath.mp.dps = 100 # set decimal precision
x = mpf('1e-20')
print((mpf('1') / mpmath.tanh(x)) - (mpf('1') / x))
# 0.000000000000000000003333333333333333333333333333333333333333311111111111111111111946629156220629025294373160489201095913
It gets extremely precise.
Look into mpmath plotting. mpmath plays well with matplotlib, which you are using, so this should solve your problem.
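For instance, mpmath ships a small plotting helper that uses matplotlib under the hood (a rough sketch; the interval is arbitrary):
import mpmath

# plot 1/tanh(x) - 1/x with mpmath's built-in helper
mpmath.plot(lambda x: 1/mpmath.tanh(x) - 1/x, [0.1, 10])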
Here is an example of how to integrate mpmath into the code you wrote above:
import numpy
import matplotlib.pyplot as plt
import mpmath
from mpmath import mpf
mpmath.mp.dps = 100 # set decimal precision
x = numpy.array([mpf('2')**k for k in range(-30, 30)])
y = mpf('1.0') / numpy.array([mpmath.tanh(e) for e in x]) - mpf('1.0') / x
plt.loglog(x, y)
plt.show()
A probably simpler solution to overcome this is changing the data type under which numpy is operating:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-30, 30, dtype=np.longdouble)
x = 2**x
y = 1.0 / np.tanh(x) - 1.0 / x
plt.loglog(x, y)
plt.show()
Using longdouble as data type does give the proper solution without rounding errors.
I slightly modified your example; in your case, the only thing you need to change is:
x = numpy.array([2**k for k in range(-30, 30)])
to:
x = numpy.array([2**k for k in range(-30, 30)], dtype=numpy.longdouble)
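To see how much extra precision longdouble offers on your platform (the exact value is platform dependent), you can check the machine epsilon:
import numpy as np

print(np.finfo(np.float64).eps)     # about 2.2e-16
print(np.finfo(np.longdouble).eps)  # about 1.1e-19 on x86 (80-bit extended precision)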
I'm trying to write a function called plotting which takes input parameters Z, p and q and plots the function
f(y) = det(Z − yI) on the interval [p, q]
(Note: I is the identity matrix.) det() is the determinant.
For finding det(), numpy.linalg.det() can be used
and for the identity matrix, np.matlib.identity(n).
Is there a way to write such a function in Python, and plot it?
import numpy as np
def f(y):
    I2 = np.matlib.identity(y)
    x = Z-yI2
    numpy.linalg.det(x)
    ....
Is what I am trying correct? Any alternative?
You could use the following implementation.
import numpy as np
import matplotlib.pyplot as plt
def f(y, Z):
    n, m = Z.shape
    assert(n == m)
    I = np.identity(n)
    x = Z - y * I
    return np.linalg.det(x)
Z = np.matrix('1 2; 3 4')
p = -15
q = 15
y = np.linspace(p, q)
w = np.zeros(y.shape)
for i in range(len(y)):
    w[i] = f(y[i], Z)
plt.plot(y, w)
plt.show()
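As a side note, np.matrix is discouraged in recent numpy releases; the same idea works with a plain ndarray and a list comprehension, for example:
import numpy as np
import matplotlib.pyplot as plt

def f(y, Z):
    # determinant of Z - y*I, i.e. the characteristic polynomial of Z evaluated at y
    return np.linalg.det(Z - y * np.identity(Z.shape[0]))

Z = np.array([[1, 2], [3, 4]])
y = np.linspace(-15, 15, 200)
w = np.array([f(yi, Z) for yi in y])

plt.plot(y, w)
plt.show()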