Plotting a mathematical function in Python

I'm trying to write a function called plotting which takes input parameters Z, p and q and plots the function
f(y) = det(Z − yI) on the interval [p, q]
(Note: I is the identity matrix and det() is the determinant.)
For the determinant, numpy.linalg.det() can be used, and for the identity matrix, np.matlib.identity(n).
Is there a way to write such a function in Python and plot it?
import numpy as np

def f(y):
    I2 = np.matlib.identity(y)
    x = Z - yI2
    numpy.linalg.det(x)
    ...
Is what I am trying correct? Is there an alternative?

You could use the following implementation.
import numpy as np
import matplotlib.pyplot as plt

def f(y, Z):
    n, m = Z.shape
    assert n == m  # Z must be square
    I = np.identity(n)
    x = Z - y * I
    return np.linalg.det(x)

Z = np.matrix('1 2; 3 4')
p = -15
q = 15

# evaluate f on a grid of y values and plot
y = np.linspace(p, q)
w = np.zeros(y.shape)
for i in range(len(y)):
    w[i] = f(y[i], Z)

plt.plot(y, w)
plt.show()
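If you want this wrapped up as the plotting(Z, p, q) function the question asks for, a minimal sketch could look like the following (the function name comes from the question; the 200-point resolution is my own choice):
import numpy as np
import matplotlib.pyplot as plt

def plotting(Z, p, q, num=200):
    """Plot f(y) = det(Z - y*I) for y in [p, q]."""
    Z = np.asarray(Z)
    n, m = Z.shape
    assert n == m  # Z must be square
    I = np.identity(n)
    ys = np.linspace(p, q, num)
    # evaluate the determinant at each sample point
    ws = [np.linalg.det(Z - y * I) for y in ys]
    plt.plot(ys, ws)
    plt.xlabel('y')
    plt.ylabel('det(Z - yI)')
    plt.show()

plotting(np.array([[1, 2], [3, 4]]), -15, 15)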

Related

How to do a trapezoid integration in Python with x^2?

My task is to do, first, a normal integration and, second, a trapezoid integration of f(x) = x^2 with Python.
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-10,10)
y = x**2
plt.plot(x, y)
plt.show()
Now I want to integrate this function to get the antiderivative F(x) = (1/3)x^3.
Could someone explain how to get the antiderivative F(x) of f(x) = x^2 with Python? I want to do this with a normal integration and with a trapezoid integration. For the trapezoidal integration, use the interval from -10 to 10 and a step size of 0.01 (the width of the trapezoids). In the end I want to obtain the function F(x) = (1/3)x^3 in both cases. How can I achieve this?
Thanks for helping me.
There are two key observations:
the trapezoidal rule refers to numeric integration, whose output is not an integral function but a number
integration is only defined up to an arbitrary constant, which is not included in your definition of F(x)
With this in mind, you can use scipy.integrate.trapz() to define an integral function:
import numpy as np
from scipy.integrate import trapz

def numeric_integral(x, f, c=0):
    return np.array([trapz(f(x[:i]), x[:i]) for i in range(len(x))]) + c
or, more efficiently, using scipy.integrate.cumtrapz() (which does the computation from above):
import numpy as np
from scipy.integrate import cumtrapz

def numeric_integral(x, f, c=0):
    # initial=0 keeps the output the same length as x; add the constant afterwards
    return cumtrapz(f(x), x, initial=0) + c
This plots as below:
import matplotlib.pyplot as plt

def func(x):
    return x ** 2

x = np.arange(-10, 10, 0.01)
y = func(x)
Y = numeric_integral(x, func)

plt.plot(x, y, label='f(x) = x²')
plt.plot(x, Y, label='F(x) = x³/3 + c')
plt.plot(x, x ** 3 / 3, label='F(x) = x³/3')
plt.legend()
plt.show()
which gives you the desired result except for the arbitrary constant, which you should specify yourself.
For good measure, while not relevant in this case, note that np.arange() does not provide stable results if used with a fractional step. Typically, one would use np.linspace() instead.
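As an aside (my note, not part of the original answer): in SciPy 1.6 and later this function is called cumulative_trapezoid, and the old cumtrapz name has been removed in recent releases, so the same helper would read:
from scipy.integrate import cumulative_trapezoid

def numeric_integral(x, f, c=0):
    return cumulative_trapezoid(f(x), x, initial=0) + c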
The cumtrapz function from scipy will provide an antiderivative using trapezoid integration:
from scipy.integrate import cumtrapz
yy = cumtrapz(y, x, initial=0)
# make yy==0 around x==0 (optional)
i_x0 = np.where(x >= 0)[0][0]
yy -= yy[i_x0]
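For reference, here is a self-contained version of that snippet; the x and y arrays are my assumption, taken from the x² setup of the question:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import cumtrapz  # cumulative_trapezoid in newer SciPy

x = np.arange(-10, 10, 0.01)
y = x ** 2

yy = cumtrapz(y, x, initial=0)

# shift so that yy == 0 around x == 0 (optional)
i_x0 = np.where(x >= 0)[0][0]
yy -= yy[i_x0]

plt.plot(x, y, label='f(x) = x²')
plt.plot(x, yy, label='F(x), trapezoid rule')
plt.legend()
plt.show()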
Trapezoid integration
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-10, 10, 0.1)
f = x**2

F = [-333.35]
for i in range(1, len(x) - 1):
    F.append((f[i] + f[i - 1])*(x[i] - x[i - 1])/2 + F[i - 1])
F = np.array(F)

fig, ax = plt.subplots()
ax.plot(x, f)
ax.plot(x[1:], F)
plt.show()
Here I have applied the trapezoid formula (f[i] + f[i - 1])*(x[i] - x[i - 1])/2 + F[i - 1]; the integration is done in this block:
F = [-333.35]
for i in range(1, len(x) - 1):
    F.append((f[i] + f[i - 1])*(x[i] - x[i - 1])/2 + F[i - 1])
F = np.array(F)
Note that, in order to plot x and F, they must have the same number of elements; so I ignore the first element of x, so that both have 199 elements. This is a consequence of the trapezoid method: if you integrate an array f of n elements, you obtain an array F of n-1 elements. Moreover, I set the initial value of F to -333.35 at x = -10; this is the arbitrary constant from the integration process, and I chose that value so that the curve passes near the origin.
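A loop-free variant of the same trapezoid rule (my own sketch, not part of this answer) replaces the Python loop with np.cumsum:
import numpy as np

x = np.arange(-10, 10, 0.1)
f = x ** 2
F0 = -333.35  # arbitrary integration constant, as in the loop version

# area of each trapezoid, then a running sum, with the constant prepended
areas = (f[1:] + f[:-1]) * np.diff(x) / 2
F = F0 + np.concatenate(([0.0], np.cumsum(areas)))  # same length as x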
Analytical integration
import sympy as sy
import numpy as np
import matplotlib.pyplot as plt
x = sy.symbols('x')
f = x**2
F = sy.integrate(f, x)
xv = np.arange(-10, 10, 0.1)
fv = sy.lambdify(x, f)(xv)
Fv = sy.lambdify(x, F)(xv)
fig, ax = plt.subplots()
ax.plot(xv, fv)
ax.plot(xv, Fv)
plt.show()
Here I use symbolic math through the sympy module. The integration is done in this line:
F = sy.integrate(f, x)
Note that, in this case, the evaluated arrays Fv and xv already have the same number of elements, and the code is simpler.
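If you want the integration constant to appear explicitly in the symbolic result, a small sketch of mine:
import sympy as sy

x, C = sy.symbols('x C')
f = x ** 2
F = sy.integrate(f, x) + C  # sympy omits the constant, so add it yourself
print(F)                    # x**3/3 + C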

How can I make a contour plot of a quadratic form using matplotlib?

I have the following quadratic form f(x) = x^T A x - b^T x, and I've used numpy to define my matrices A and b:
A = np.array([[4,3], [3,7]])
b = np.array([3,-7])
So we're talking about 2 dimensions here, meaning that the contour plot will have the axes x1 and x2 and I want these to span from -4 to 4.
I've tried to experiment by doing
u = np.linspace(-4,4,100)
x, y = np.meshgrid(u,u)
in order to create the two axes x1 and x2, but then I don't know how to define my function f(x). If I do plt.contour(x, y, f) it won't work, because the function f(x) is defined with only x as an argument.
Any ideas would be greatly appreciated. Thanks!
EDIT: I managed to "solve" the problem by expanding the quadratic form by hand (for example x^T A x), which left me with a function of x1 and x2, the components of the vector x. After that I did
u = np.linspace(-4,4,100)
x, y = np.meshgrid(u,u)
z = 1.5*(x**2) + 3*(y**2) - 2*x + 8*y + 2*x*y  # (that's the function I ended up with)
plt.contour(x, y, z)
If your matrix A and vector b look like
A = np.array([[4,3], [3,7]])
b = np.array([3,-7])
and your data look like
u = np.linspace(-4,4,100)
x, y = np.meshgrid(u,u)
x.shape
then x and y will both have shape (100, 100).
You can define f(x) as
def f(x):
    return np.dot(np.dot(x.T, A), x) - np.dot(b, x)
and then feed anything with shape (2, N) into the function f.
I am unfortunately not sure which values you want to feed into it, but one example would be the grid spanning [-4, 4] in both directions:
plt.contour(x, y, f(x[0:2,:]))
Update
If the contour plot does not fit your purpose, you can use other plots, e.g. 3D visualizations.
from mpl_toolkits.mplot3d import Axes3D # This import has side effects required for the kwarg projection='3d' in the call to fig.add_subplot
fig = plt.figure(figsize=(40,20))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x,y, f(x[0:2,:]))
plt.show()
If you expect other values in the z-dimension, the projection f might be off.
For other 3d plots see: https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html
You could try something like this:
import numpy as np
import matplotlib.pyplot as plt
A = np.array([[4,3], [3,7]])
n_points = 100
u = np.linspace(-4, 4, n_points)
x, y = np.meshgrid(u, u)
X = np.vstack([x.flatten(), y.flatten()])
f_x = np.dot(np.dot(X.T, A), X)
f_x = np.diag(f_x).reshape(n_points, n_points)
plt.figure()
plt.contour(x, y, f_x)
Another alternative is to compute f_x as follows.
f_x = np.zeros((n_points, n_points))
for i in range(n_points):
    for j in range(n_points):
        in_v = np.array([[x[i][j]], [y[i][j]]])
        f_x[i][j] = np.dot(np.dot(in_v.T, A), in_v)
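As a side note (my own sketch, not part of either answer above): the full quadratic form from the question, including the -b^T x term, can be evaluated on the grid without the huge intermediate matrix or the double loop by using np.einsum:
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[4, 3], [3, 7]])
b = np.array([3, -7])

n_points = 100
u = np.linspace(-4, 4, n_points)
x, y = np.meshgrid(u, u)
X = np.vstack([x.flatten(), y.flatten()])  # shape (2, n_points**2)

# x_k^T A x_k for every column k, minus b^T x_k
f_x = np.einsum('ik,ij,jk->k', X, A, X) - b @ X
f_x = f_x.reshape(n_points, n_points)

plt.contour(x, y, f_x)
plt.show()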

Fitting peaks with Scipy curve_fit, error optimal parameters not found

I recently started with Python because I have an enormous amount of data in which I want to automatically fit Gaussians to the peaks in spectra. An example spectrum contains three peaks that I want to fit with three individual Gaussians.
I have found a question where someone is looking for something very similar, How can I fit multiple Gaussian curves to mass spectrometry data in Python?, and adapted it to my script.
I have added my code at the bottom, and when I run the last section I get the error "RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800." What am I missing?
The data can be downloaded at https://www.dropbox.com/s/zowawljcjco70yh/data_so.h5?dl=0
#%%
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.optimize import curve_fit
#%% Read data
path = 'D:/Python/data_so.h5'
f = pd.read_hdf(path, mode = 'r')
t = f.loc[:, 'Time stamp']
d = f.drop(['Time stamp', 'Name spectrum'], axis = 1)
#%% Extract desired wavenumber range
wn_st=2000
wn_ed=2500
ix_st=np.argmin(abs(d.columns.values-wn_st))
ix_ed=np.argmin(abs(d.columns.values-wn_ed))
d = d.iloc[:, ix_st:ix_ed+1]
#%% AsLS baseline correction
spectrum = 230
y = d.iloc[spectrum]
niter = 10
lam = 200000
p = 0.005
L = len(y)
D = sparse.diags([1,-2,1],[0,-1,-2], shape=(L,L-2))
w = np.ones(L)
for i in range(niter):
    W = sparse.spdiags(w, 0, L, L)
    Z = W + lam * D.dot(D.transpose())
    z = spsolve(Z, w*y)
    w = p * (y > z) + (1-p) * (y < z)
corr = d.iloc[spectrum,:] - z
#%% Plot spectrum, baseline and corrected spectrum
plt.clf()
plt.plot(d.columns, d.iloc[spectrum,:])
plt.plot(d.columns, z)
plt.plot(d.columns, corr)
plt.gca().invert_xaxis()
plt.show()
#%%
x = d.columns.values
def gauss(x, a, mu, sig):
    return a*np.exp(-(x.astype(float)-mu)**2/(2*sig**2))
fitx = x[(x>2232)*(x<2252)]
fity = y[(x>2232)*(x<2252)]
mu=np.sum(fitx*fity)/np.sum(fity)
sig=np.sqrt(np.sum(fity*(fitx-mu)**2)/np.sum(fity))
popt, pcov = curve_fit(gauss, fitx, fity, p0=[max(fity),mu, sig])
plt.plot(x, gauss(x, popt[0],popt[1],popt[2]), 'r-', label='fit')
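A generic remark of my own (an assumption, not an answer from the thread): the maxfev error often just means the default evaluation budget ran out before convergence, so raising the limit is a cheap first test before reworking the starting values or the fit window:
# not the accepted fix, just a generic knob: allow more function evaluations
popt, pcov = curve_fit(gauss, fitx, fity, p0=[max(fity), mu, sig], maxfev=5000)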

Python curve_fit with multiple independent variables

Python's curve_fit calculates the best-fit parameters for a function with a single independent variable, but is there a way, using curve_fit or something else, to fit for a function with multiple independent variables? For example:
def func(x, y, a, b, c):
    return log(a) + b*log(x) + c*log(y)
where x and y are the independent variables and we would like to fit a, b, and c.
You can pass curve_fit a multi-dimensional array for the independent variables, but then your func must accept the same thing. For example, calling this array X and unpacking it to x, y for clarity:
import numpy as np
from scipy.optimize import curve_fit
def func(X, a, b, c):
    x, y = X
    return np.log(a) + b*np.log(x) + c*np.log(y)
# some artificially noisy data to fit
x = np.linspace(0.1,1.1,101)
y = np.linspace(1.,2., 101)
a, b, c = 10., 4., 6.
z = func((x,y), a, b, c) * 1 + np.random.random(101) / 100
# initial guesses for a,b,c:
p0 = 8., 2., 7.
print(curve_fit(func, (x,y), z, p0))
Gives the fit:
(array([ 9.99933937,  3.99710083,  6.00875164]),
 array([[  1.75295644e-03,   9.34724308e-05,  -2.90150983e-04],
        [  9.34724308e-05,   5.09079478e-06,  -1.53939905e-05],
        [ -2.90150983e-04,  -1.53939905e-05,   4.84935731e-05]]))
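To actually use the result (a small addition of mine), bind the returned tuple and evaluate the model with the fitted parameters:
popt, pcov = curve_fit(func, (x, y), z, p0)
a_fit, b_fit, c_fit = popt      # fitted parameters
perr = np.sqrt(np.diag(pcov))   # one-sigma uncertainties
z_fit = func((x, y), *popt)     # model evaluated at the fit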
Optimizing a function with multiple input dimensions and a variable number of parameters
This example shows how to fit a polynomial with a two-dimensional input (R^2 -> R) with an increasing number of coefficients. The design is very flexible: the callable f passed to curve_fit is defined once for any number of non-keyword arguments.
Minimal reproducible example
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def poly2d(xy, *coefficients):
    x = xy[:, 0]
    y = xy[:, 1]
    proj = x + y
    res = 0
    for order, coef in enumerate(coefficients):
        res += coef * proj ** order
    return res

nx = 31
ny = 21
range_x = [-1.5, 1.5]
range_y = [-1, 1]
target_coefficients = (3, 0, -19, 7)

xs = np.linspace(*range_x, nx)
ys = np.linspace(*range_y, ny)
im_x, im_y = np.meshgrid(xs, ys)
xdata = np.c_[im_x.flatten(), im_y.flatten()]
im_target = poly2d(xdata, *target_coefficients).reshape(ny, nx)

fig, axs = plt.subplots(2, 3, figsize=(29.7, 21))
axs = axs.flatten()

ax = axs[0]
ax.set_title('Unknown polynomial P(x+y)\n[secret coefficients: ' + str(target_coefficients) + ']')
sm = ax.imshow(
    im_target,
    cmap=plt.get_cmap('coolwarm'),
    origin='lower'
)
fig.colorbar(sm, ax=ax)

for order in range(5):
    ydata = im_target.flatten()
    popt, pcov = curve_fit(poly2d, xdata=xdata, ydata=ydata, p0=[0]*(order+1))
    im_fit = poly2d(xdata, *popt).reshape(ny, nx)

    ax = axs[1+order]
    title = 'Fit O({:d}):'.format(order)
    for o, p in enumerate(popt):
        if o % 2 == 0:
            title += '\n'
        if o == 0:
            title += ' {:=-{w}.1f} (x+y)^{:d}'.format(p, o, w=int(np.log10(max(abs(p), 1))) + 5)
        else:
            title += ' {:=+{w}.1f} (x+y)^{:d}'.format(p, o, w=int(np.log10(max(abs(p), 1))) + 5)
    title += '\nrms: {:.1f}'.format(np.mean((im_fit-im_target)**2)**.5)
    ax.set_title(title)
    sm = ax.imshow(
        im_fit,
        cmap=plt.get_cmap('coolwarm'),
        origin='lower'
    )
    fig.colorbar(sm, ax=ax)

for ax in axs.flatten():
    ax.set_xlabel('x')
    ax.set_ylabel('y')

plt.show()
Fitting with an unknown number of parameters
In this example, we try to reproduce some measured data measData.
In this example, measData is generated by the function measuredData(inp, a=.2, b=-2, c=-.8, d=.1). In practice, we might have measured measData in some way, so we have no idea how it is described mathematically. Hence the fit.
We fit with a polynomial, described by the function polynomFit(inp, *args). Since we want to try out different polynomial orders, it is important to be flexible in the number of fit parameters.
The independent variables (x and y in your case) are encoded in the 'columns'/second dimension of inp.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def measuredData(inp, a=.2, b=-2, c=-.8, d=.1):
    x = inp[:, 0]
    y = inp[:, 1]
    return a + b*x + c*x**2 + d*x**3 + y

def polynomFit(inp, *args):
    x = inp[:, 0]
    y = inp[:, 1]
    res = 0
    for order in range(len(args)):
        res += args[order] * x**order
    return res + y

inpData = np.linspace(0, 10, 20).reshape(-1, 2)
inpDataStr = ['({:.1f},{:.1f})'.format(a, b) for a, b in inpData]
measData = measuredData(inpData)

fig, ax = plt.subplots()
ax.plot(np.arange(inpData.shape[0]), measData, label='measured', marker='o', linestyle='none')

for order in range(5):
    popt, pcov = curve_fit(polynomFit, xdata=inpData, ydata=measData, p0=[0]*(order+1))
    fitData = polynomFit(inpData, *popt)
    ax.plot(np.arange(inpData.shape[0]), fitData, label='polyn. fit, order ' + str(order), linestyle='--')
    ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
    print(order, popt)

ax.set_xticklabels(inpDataStr, rotation=90)
plt.show()
Yes, we can pass multiple variables to curve_fit. I have written a piece of code:
import numpy as np
from scipy.optimize import curve_fit

x = np.random.randn(2, 100)
w = np.array([1.5, 0.5]).reshape(1, 2)
esp = np.random.randn(1, 100)
y = np.dot(w, x) + esp
y = y.reshape(100,)
In the code above I have generated x, a 2D data set of shape (2, 100), i.e. two variables with 100 data points each. The dependent variable y is generated from the independent variables x with some added noise.
def model_func(x, w1, w2, b):
    w = np.array([w1, w2]).reshape(1, 2)
    b = np.array([b]).reshape(1, 1)
    y_p = np.dot(w, x) + b
    return y_p.reshape(100,)
We have defined a model function that establishes the relation between y and x.
Note: the output of the model function (the predicted y) should have shape (length of x,).
popt, pcov = curve_fit(model_func,x,y)
popt is a 1D numpy array containing the fitted parameters; in our case there are 3 of them.
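A quick sanity check of the recovered parameters (my own addition, using the names above):
w1_fit, w2_fit, b_fit = popt
print(w1_fit, w2_fit, b_fit)   # should come out near 1.5, 0.5 and roughly 0
y_pred = model_func(x, *popt)  # predictions from the fitted model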
Yes, there is: simply give curve_fit a multi-dimensional array for xData.

Need help to compute derivative and integral using Numpy

I need help to compute the derivative and the integral of a function using the finite difference method and Numpy, without using loops.
The whole task: Tabulate the Gauss function f(x) = (1./(sqrt(2.*pi)*s))*e**(-0.5*((x-m)/s)**2) on the interval [-10, 10] for m = 0 and s = [0.5, 5]. Compute the derivative and the integral of the function using the finite difference method, without using loops. Create plots of the function and its derivative. Use Numpy and Matplotlib.
Here's the beginning of the program:
def f(x, s, m):
    return (1./(sqrt(2.*pi)*s))*e**(-0.5*((x-m)/s)**2)

def main():
    m = 0
    s = np.linspace(0.5, 5, 3)
    x = np.linspace(-10, 10, 20)
    for i in range(3):
        print('s = ', s[i])
        for j in range(20):
            f(x[j], s[i], m)
            print('x = ', x[j], ', y = ', f(x[j], s[i], m))
By using numpy arrays you can apply that operation directly in algebraic notation:
result = (1./(np.sqrt(2.*np.pi)*s))*np.exp(-0.5*((x-m)/s)**2)
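For instance, a small sketch of mine that tabulates the function over the whole x grid and both s values from the task at once, using broadcasting instead of loops:
import numpy as np

m = 0
s = np.array([0.5, 5.0])                 # the two sigma values named in the task
x = np.linspace(-10, 10, 1000)[:, None]  # column vector, so it broadcasts against s

# shape (1000, 2): one column per value of s
result = (1./(np.sqrt(2.*np.pi)*s)) * np.exp(-0.5*((x - m)/s)**2)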
The simplest way (without using SciPy) seems to me to be a direct sum for the integral and the central difference method for the derivative:
import numpy as np
import pylab

def gaussian(x, s, m):
    return 1./(np.sqrt(2.*np.pi)*s) * np.exp(-0.5*((x-m)/s)**2)

m = 0
s = np.linspace(0.5, 5, 3)
x, dx = np.linspace(-10, 10, 1000, retstep=True)
x = x[:, np.newaxis]
y = gaussian(x, s, m)

h = 1.e-6
dydx = (gaussian(x+h, s, m) - gaussian(x-h, s, m))/2/h

int_y = np.sum(gaussian(x, s, m), axis=0) * dx
print(int_y)

pylab.plot(x, y)
pylab.plot(x, dydx)
pylab.show()
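As a further sketch of mine (not part of the answer above): once y is tabulated, the derivative and the integral can also be taken directly from the array with np.gradient and np.trapz, still without loops:
import numpy as np

# assumes x, y and dx from the snippet above (y has shape (1000, 3))
dydx_tab = np.gradient(y, dx, axis=0)    # central differences on the tabulated values
int_y_tab = np.trapz(y, dx=dx, axis=0)   # one integral per column of y (per value of s)
print(int_y_tab)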
