I have some function f(list) which receives as an argument a list of length 2, i.e., list = [entry_1, entry_2]. I need to do a contour plot of this function:
x = np.linspace(0, 2, 1000+1)
y = np.linspace(0, 2, 1000+1)
X, Y = np.meshgrid(x, y)
Z = ?
plt.contour(X, Y, Z)
plt.show()
The problem is: I don't know how to pass the arguments. If the function was of the type f(x,y), then
Z = f(X, Y)
would do the job. But
Z = f([X,Y])
fails: it receives too many arguments. How can I do this?
EDIT: Here are the functions of the program:
from scipy.optimize import minimize
import numpy as np
def c_Gamma_gamma_fv(cf, cv):
    return np.abs((4 * eta_gamma * charges**2 * a_q * cf).sum() + 4. * cf * a_tau / 3. + a_w * cv)**2 / Gamma_gamma
def mu_fv(cf, cv):
    return np.array([cf**4,
                     cf**2 * cv**2,
                     cf**2 * c_Gamma_gamma_fv(cf, cv),
                     cv**2 * c_Gamma_gamma_fv(cf, cv),
                     cf**4,
                     cv**2 * cf**2,
                     cf**2 * cv**2,
                     cv**4,
                     cv**2 * cf**2,
                     cv**4])
def chi_square_fv(clist):
    cf, cv = clist
    return (mu_fv(cf, cv) - mu_data) @ inv_cov @ (mu_fv(cf, cv) - mu_data)
x0 = [1., 1.]
res_fv = minimize(chi_square_fv, x0)
print(res_fv)
def delta_chi_fv(clist):
    return chi_square_fv(clist) - chi_square_fv([res_fv.x[0], res_fv.x[1]])
All variables not explicit are constants. The function I want to plot is delta_chi_fv.
The solution I found (with a little help from a professor here at my university), though not the fastest one, works:
Z = np.zeros((len(x), len(y)))
for i in range(len(x)):
    for j in range(len(y)):
        z = delta_chi_fv([x[i], y[j]])
        Z[i, j] = z
With this construction of Z, then
plt.contour(X,Y,Z)
works fine. If anyone knows another answer, it would be great to learn more things about this language.
Cheers,
Gabriel.
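As a footnote to the loop-based answer above: np.vectorize gives a more concise (though not necessarily faster, since it still loops in Python) way to build Z from a function that takes a 2-element list. A sketch, assuming delta_chi_fv is defined as above:
# wrap delta_chi_fv so it can be applied elementwise over the meshgrid arrays
delta_chi_grid = np.vectorize(lambda cf, cv: delta_chi_fv([cf, cv]))
Z = delta_chi_grid(X, Y)  # Z has the same shape as X and Y
plt.contour(X, Y, Z)
plt.show()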
I am new to Python, and I am trying to make a numerical analysis model of differential equations.
import sympy
def picard_solver(y_0, x_0, rhs_expression, iteration_count: int = 5):
    x, phi = sympy.symbols("x phi")
    phi = x_0
    for i in range(iteration_count + 1):
        phi = y_0 + sympy.integrate(rhs_expression(x, phi), (x, x_0, x))
    return phi
import numpy
import plotly.graph_objects as go
y_set = [picard_solver(1, 0, lambda x, y: x * y, i) for i in range(1, 6)]
x_grid = numpy.linspace(-2, 2, 1000)
y_picard = list()
for y in y_set:
    y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid]))
y_exact = numpy.exp((x_grid) * (x_grid) / 2)
fig = go.Figure()
for i, y_order in enumerate(y_picard):
    fig.add_trace(go.Scatter(x=x_grid, y=y_order, name=f"Picard Order {i + 1}"))
# fig.add_trace(go.Scatter(x=x_grid, y=y_picard, name="Picard Solution"))
fig.add_trace(go.Scatter(x=x_grid, y=y_exact, name="Exact Solution"))
fig.show()
fig.write_html("picard_vs_exact.html")
But when I try to run it, I get a NameError: name 'x' is not defined error. Can someone help me?
I want a graph to be shown.
I think that you need to pass a string in the y.evalf(subs={'x': x_i}) part of your code.
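As a variation on that suggestion, a minimal sketch that avoids the NameError by re-creating the symbol in the plotting scope (the string-key form suggested above may also work, depending on the sympy version):
import numpy
import sympy
# the symbol x only exists inside picard_solver, so re-create it here for the substitution
x = sympy.symbols("x")
y_picard = []
for y in y_set:
    y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid]))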
I have implemented the following explicit Euler method in Python:
def explicit_euler(df, x0, h, N):
    """Solves an ODE IVP using the Explicit Euler method.
    Keyword arguments:
    df - The derivative of the system you wish to solve.
    x0 - The initial value of the system you wish to solve.
    h - The step size.
    N - The number of steps.
    """
    x = np.zeros(N)
    x[0] = x0
    for i in range(0, N-1):
        x[i+1] = x[i] + h * df(x[i])
    return x
Following the article on Wikipedia, I can plot the function and verify that I get the same plot, so I believe the method I have written is working correctly.
Next I tried to use it to solve the last system given on this page and instead of the plot shown there I obtain this:
I am not sure why my plot doesn't match the one shown on the webpage. The explicit Euler method seems to work fine when I use it to solve systems where the slope doesn't change sign, but for an oscillating function it never seems to mimic it at all, not even showing the expected error growth indicated on the linked webpage. I am not sure what is wrong with the method I have implemented.
Here is the code used for plotting and the derivative:
def g(t):
    return -0.5 * np.exp(t * 0.5) * np.sin(5 * t) + 5 * np.exp(t * 0.5) * np.cos(5 * t)
h = 0.001
x0 = 0
tn = 4
N = int(tn / h)
x = ee.explicit_euler(f, x0, h, N)
t = np.arange(0, tn, h)
fig = plt.figure()
plt.plot(t, x, label="Explicit Euler")
plt.plot(t, (np.exp(0.5 * t) * np.sin(5 * t)), label="Analytical solution")
#plt.plot(t, np.exp(0.5 * t), label="Analytical solution")
plt.xlabel('Timesteps t')
plt.ylabel('x(t)=e^(0.5*t) * sin(5*t)')
plt.legend()
plt.grid()
plt.show()
Edit:
As requested here is the current equation I am applying the method to:
y'-y=-0.5*e^(t/2)*sin(5t)+5e^(t/2)*cos(5t)
Where y(0)=0.
I would like to make clear, however, that this behaviour doesn't occur just for this equation, but for all equations where the slope changes sign or shows oscillating behaviour.
Edit 2:
Ok thanks. Yes the code below does indeed work. But I have one further question. In the simple example I had for the exponential function, I had defined a method:
def f(x):
    return x
for the system f'(x)=x. This gave the output of my first graph, which looks correct. I then defined another function:
def k(x):
    return cos(x)
for the system f'(x)=cos(x); this does not give the expected output. But when I change the function definition to
def k(t, x):
    return cos(t)
I get the expected output. If I change my function to
def f(t, x):
    return t
I get an incorrect output. Am I always actually evaluating the function at a time step and is it just by chance for the system x'=x that at each time step the value is just the value of x?
I had understood that the Euler method used the value of the previously calculated value in order to get the next value. But if I run code for my function k(x)=cos(x), I get output pictured below, which must be incorrect. This now uses the updated code you provided.
def k(t, x):
    return np.cos(x)
h = 0.1 # Step size
x0 = (0, 0) # Initial point of iteration
tn = 10 # Time step to iterate to
N = int(tn / h) # Number of steps
x = ee.explicit_euler(k, x0, h, N)
t = np.arange(0, tn, h)
The problem is that you have set up the function g incorrectly. You want to solve the equation
y' - y = -0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t)
from which we observe that:
y' = y - 0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t)
Then we define the function f(t, y) = y -0.5*e^(t/2)*sin(5t)+5e^(t/2)*cos(5t) as:
def f(t, y):
    return y - 0.5 * np.exp(t * 0.5) * np.sin(5 * t) + 5 * np.exp(t * 0.5) * np.cos(5 * t)
The initial point of iteration is f0=(t(0), y(0)):
f0 = (0, 0)
Then from Euler's equations:
def explicit_euler(df, x0, h, N):
    """Solves an ODE IVP using the Explicit Euler method.
    Keyword arguments:
    df - The derivative of the system you wish to solve.
    x0 - The initial value of the system you wish to solve.
    h - The step size.
    N - The number of steps.
    """
    x = np.zeros(N)
    t, x[0] = x0
    for i in range(0, N-1):
        x[i+1] = x[i] + h * df(t, x[i])
        t += h
    return x
Complete Code:
import numpy as np
import matplotlib.pyplot as plt
def explicit_euler(df, x0, h, N):
    """Solves an ODE IVP using the Explicit Euler method.
    Keyword arguments:
    df - The derivative of the system you wish to solve.
    x0 - The initial value of the system you wish to solve.
    h - The step size.
    N - The number of steps.
    """
    x = np.zeros(N)
    t, x[0] = x0
    for i in range(0, N-1):
        x[i+1] = x[i] + h * df(t, x[i])
        t += h
    return x
def df(t, y):
    return -0.5 * np.exp(t * 0.5) * np.sin(5 * t) + 5 * np.exp(t * 0.5) * np.cos(5 * t) + y
h = 0.001
f0 = (0, 0)
tn = 4
N = int(tn / h)
x = explicit_euler(df, f0, h, N)
t = np.arange(0, tn, h)
fig = plt.figure()
plt.plot(t, x, label="Explicit Euler")
plt.plot(t, (np.exp(0.5 * t) * np.sin(5 * t)), label="Analytical solution")
#plt.plot(t, np.exp(0.5 * t), label="Analytical solution")
plt.xlabel('Timesteps t')
plt.ylabel('x(t)=e^(0.5*t) * sin(5*t)')
plt.legend()
plt.grid()
plt.show()
Screenshot:
Isolate y' on the left-hand side; whatever is on the right-hand side is what you should place in the df function.
We rename the variables to keep the same convention: y is the dependent variable and t the independent variable.
Equation 2: In this case the equation f'(x)=cos(x) will be rewritten to:
y'=cos(t)
Then:
def df(t, y):
    return np.cos(t)
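A quick usage sketch of that case, assuming the explicit_euler defined above and the usual numpy/matplotlib imports (y' = cos(t) with y(0) = 0 has the exact solution y = sin(t)):
h = 0.1
f0 = (0.0, 0.0)   # (t0, y0)
tn = 10
N = int(tn / h)
y = explicit_euler(df, f0, h, N)
t = np.arange(0, tn, h)
plt.plot(t, y, label="Explicit Euler")
plt.plot(t, np.sin(t), label="Exact solution")
plt.legend()
plt.show()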
In conclusion, if we have an equation of the following form:
y' = f(t, y)
Then:
def df(t, y):
    return f(t, y)
I want to use scipy.optimize.broyden2; the problem is that my function doesn't just take an array as an argument, but a lot more parameters.
What should I do? Define global variables?
These are my functions:
def F(S, I, R, alpha, beta):
    return [- beta * S * I, beta * S * I - alpha * R, alpha * R]
def euler(xi, xf, m, F, initial_values, alpha, beta):
    h = (xf - xi) / m
    t = np.linspace(xi, xf, m + 1)
    t = np.delete(t, 0)
    vect_y = [initial_values[0], initial_values[1], initial_values[2]]
    for i in range(len(t)):
        y_actual = [sum(x) for x in zip(vect_y, [element * h for element in F(vect_y[0], vect_y[1], vect_y[2], alpha, beta)])]
        vect_y = y_actual
    return vect_y
I want to use broyden2 with euler, where x0 would be initial_values.
As was suggested in comments, you can use an auxiliary function that unpacks the list of arguments using *list syntax, and calls your main function with that. A minimal example is shown below, where f is the function whose root is being found.
from scipy.optimize import broyden2
def f(x, y, z):
    return [x-1, y-2, z-3]
broyden2(lambda X: f(*X), [0, 0, 0])
Output: array([ 1., 2., 3.])
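The same idea applies to your euler function: wrap it so that broyden2 only sees the vector of initial values, while the remaining parameters are closed over. A sketch, with the values of xi, xf, m, alpha and beta purely illustrative:
from scipy.optimize import broyden2

xi, xf, m = 0.0, 10.0, 1000   # hypothetical integration settings
alpha, beta = 0.1, 0.5        # hypothetical model parameters

def wrapped(initial_values):
    # broyden2 varies only initial_values; it searches for wrapped(initial_values) = 0
    return euler(xi, xf, m, F, initial_values, alpha, beta)

solution = broyden2(wrapped, [0.99, 0.01, 0.0])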
I already opened a question on this topic, but I wasn't sure if I should post it there, so I opened a new question here.
I am having trouble again when fitting two or more peaks. The first problem occurs with a calculated example function.
xg = np.random.uniform(0,1000,500)
mu1 = 200
sigma1 = 20
I1 = -2
mu2 = 800
sigma2 = 20
I2 = -1
yg3 = 0.0001*xg
yg1 = (I1 / (sigma1 * np.sqrt(2 * np.pi))) * np.exp( - (xg - mu1)**2 / (2 * sigma1**2) )
yg2 = (I2 / (sigma2 * np.sqrt(2 * np.pi))) * np.exp( - (xg - mu2)**2 / (2 * sigma2**2) )
yg=yg1+yg2+yg3
plt.figure(0, figsize=(8,8))
plt.plot(xg, yg, 'r.')
I tried two different approaches that I found in the documentation, shown below (modified for my data), but both give me wrong fitting data and a messy chaos of graphs (I guess one line per fitting step).
1st attempt:
import numpy as np
from lmfit.models import PseudoVoigtModel, LinearModel, GaussianModel, LorentzianModel
import sys
import matplotlib.pyplot as plt
gauss1 = PseudoVoigtModel(prefix='g1_')
pars = gauss1.make_params()
pars['g1_center'].set(200)
pars['g1_sigma'].set(15, min=3)
pars['g1_amplitude'].set(-0.5)
pars['g1_fwhm'].set(20, vary=True)
#pars['g1_fraction'].set(0, vary=True)
gauss2 = PseudoVoigtModel(prefix='g2_')
pars.update(gauss2.make_params())
pars['g2_center'].set(800)
pars['g2_sigma'].set(15)
pars['g2_amplitude'].set(-0.4)
pars['g2_fwhm'].set(20, vary=True)
#pars['g2_fraction'].set(0, vary=True)
mod = gauss1 + gauss2 + LinearModel()
pars.add('intercept', value=0, vary=True)
pars.add('slope', value=0.0001, vary=True)
init = mod.eval(pars, x=xg)
out = mod.fit(yg, pars, x=xg)
print(out.fit_report(min_correl=0.5))
plt.figure(5, figsize=(8,8))
out.plot_fit()
When I include the 'fraction' parameter, I often get
NameError: name 'pv1_fraction' is not defined in expr='<_ast.Module object at 0x00000000165E03C8>'
although it should be defined. I get this error for real data with this approach, too.
2nd attempt:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import lmfit
def gauss(x, sigma, mu, A):
    return A*np.exp(-(x-mu)**2/(2*sigma**2))
def linear(x, m, n):
    return m*x + n
peak1 = lmfit.model.Model(gauss, prefix='p1_')
peak2 = lmfit.model.Model(gauss, prefix='p2_')
lin = lmfit.model.Model(linear, prefix='l_')
model = peak1 + lin + peak2
params = model.make_params()
params['p1_mu'] = lmfit.Parameter(value=200, min=100, max=250)
params['p2_mu'] = lmfit.Parameter(value=800, min=100, max=1000)
params['p1_sigma'] = lmfit.Parameter(value=15, min=0.01)
params['p2_sigma'] = lmfit.Parameter(value=20, min=0.01)
params['p1_A'] = lmfit.Parameter(value=-2, min=-3)
params['p2_A'] = lmfit.Parameter(value=-2, min=-3)
params['l_m'] = lmfit.Parameter(value=0)
params['l_n'] = lmfit.Parameter(value=0)
out = model.fit(yg, params, x=xg)
print(out.fit_report())
plt.figure(8, figsize=(8,8))
out.plot_fit()
So the result looks like this, in both cases. It seems to plot all fitting attempts, but never solves it correctly. The best fitted parameters are in the range that I gave it.
Does anyone know this type of error, or have any solutions for it? And does anyone know how to avoid the NameError when calling a model function from lmfit with these approaches?
I have a somewhat tolerable solution for you. Since I don't know how variable your data is, I cannot say that it will work in a general sense but should get you started. If your data is along 0-1000 and has two peaks or dips along a line as you showed, then it should work.
I used scipy's curve_fit and put all of the components of the function together into one function. One can pass starting locations into the curve_fit function. (You can probably do this with the library you're using, but I'm not familiar with it.) There is a nested loop in which I vary the mu parameters to find the ones with the lowest squared error. If you need to fit your data many times or in some real-time scenario, then this is not for you; but if you just need to fit some data, launch this code and grab a coffee.
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
import pylab
from matplotlib import cm as cm
import time
def my_function_big(x, m, n,            # lin vars
                    sigma1, mu1, I1,    # gaussian 1
                    sigma2, mu2, I2):   # gaussian 2
    y = (m * x + n
         + (I1 / (sigma1 * np.sqrt(2 * np.pi))) * np.exp(-(x - mu1)**2 / (2 * sigma1**2))
         + (I2 / (sigma2 * np.sqrt(2 * np.pi))) * np.exp(-(x - mu2)**2 / (2 * sigma2**2)))
    return y
#make some data
xs = np.random.uniform(0,1000,500)
mu1 = 200
sigma1 = 20
I1 = -2
mu2 = 800
sigma2 = 20
I2 = -1
yg3 = 0.0001 * xs
yg1 = (I1 / (sigma1 * np.sqrt(2 * np.pi))) * np.exp( - (xs - mu1)**2 / (2 * sigma1**2) )
yg2 = (I2 / (sigma2 * np.sqrt(2 * np.pi))) * np.exp( - (xs - mu2)**2 / (2 * sigma2**2) )
ys = yg1 + yg2 + yg3
xs = np.array(xs)
ys = np.array(ys)
#done making data
#start a double loop...very expensive but this is quick and dirty
#it would seem that the regular optimizer has trouble finding the minima so i
#found that having the near proper mu values helped it zero in much better
start = time.time()
serr = []
_x = []
_y = []
for x in np.linspace(0, 1000, 61):
    for y in np.linspace(0, 1000, 61):
        cfiti = curve_fit(my_function_big, xs, ys, p0=[0, 0, 1, x, 1, 1, y, 1], maxfev=20000000)
        serr.append(np.sum((my_function_big(xs, *cfiti[0]) - ys) ** 2))
        _x.append(x)
        _y.append(y)
serr = np.array(serr)
_x = np.array(_x)
_y = np.array(_y)
print('done loop in loop fitting')
print('time: %0.1f' % (time.time() - start))
gridsize=20
plt.subplot(111)
plt.hexbin(_x, _y, C=serr, gridsize=gridsize, cmap=cm.jet, bins=None)
plt.axis([_x.min(), _x.max(), _y.min(), _y.max()])
cb = plt.colorbar()
cb.set_label('SE')
plt.show()
ix = np.argmin(serr.ravel())
mustart1 = _x.ravel()[ix]
mustart2 = _y.ravel()[ix]
print(mustart1)
print(mustart2)
cfit = curve_fit(my_function_big, xs, ys, p0=[0, 0, 1, mustart1, 1, 1, mustart2, 1], maxfev=2000000000)
xp = np.linspace(0, 1000, 1001)
plt.figure()
plt.scatter(xs, ys)  # plot synthetic data
plt.plot(xp, my_function_big(xp, *cfit[0]), '-', label='fit function') #plot data evaluated along 0-1000
plt.legend(loc=3, numpoints=1, prop={'size':12})
plt.show()
pylab.close()
Good luck!
In your first attempt:
pars['g1_fraction'].set(0, vary=True)
The fraction must be a value between 0 and 1, but I believe it cannot be exactly zero. Try something like 0.000001, and it will work.
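For example (a small sketch of that change):
pars['g1_fraction'].set(value=0.000001, vary=True)
pars['g2_fraction'].set(value=0.000001, vary=True)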
I've been trying to simulate a 2D Sérsic profile and then test an extraction routine on it. However, when I do a test by extracting all the points lying along an ellipse supposedly aligned with an image, I get a periodic function. It is meant to be a straight line, since all points along the ellipse should have equal intensity, although there will be a small amount of deviation due to rounding errors in the rough coordinate estimation (get_I()).
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import NearestNDInterpolator
def rotate(x, y, angle):
    x1 = x*np.cos(angle) + y*np.sin(angle)
    y1 = y*np.cos(angle) - x*np.sin(angle)
    return x1, y1
def sersic_1d(R, mu0, h, n, zp=0):
    exponent = (R / h) ** (1 / n)
    I0 = np.exp((zp - mu0) / 2.5)
    return I0 * np.exp(-1. * exponent)
def sersic_2d(x, y, e, i, mu0, h, n, zp=0):
    xp, yp = rotate(x, y, i)
    alpha = np.arctan2(yp, xp * (1-e))
    a = xp / np.cos(alpha)
    b = a * (1 - e)
    # R2 = (a*a) + ((1 - (e*e)) * yp*yp)
    return sersic_1d(a, mu0, h, n, zp)
def ellipse(x0, y0, a, e, i, theta):
    b = a * (1 - e)
    x = a * np.cos(theta)
    y = b * np.sin(theta)
    x, y = rotate(x, y, i)
    return x + x0, y + y0
def get_I(x, y, Z):
    return Z[np.round(x).astype(int), np.round(y).astype(int)]
if __name__ == '__main__':
    n = np.linspace(-100, 100, 1000)
    nx, ny = np.meshgrid(n, n)
    Z = sersic_2d(nx, ny, 0.5, 0., 0, 50, 1, 25)
    theta = np.linspace(0, 2*np.pi, 1000)
    a = 100.
    e = 0.5
    i = np.pi / 4.
    x, y = ellipse(0, 0, a, e, i, theta)
    I = get_I(x, y, Z)
    plt.plot(I)
    # plt.imshow(Z)
    plt.show()
However, what I actually get is a massive periodic function. I've checked the alignment and it's correct, and the float -> int rounding errors can't account for this kind of shift.
Any ideas?
There are two things that strike me as odd, one of which for sure is not what you wanted, the other I'm not sure about because astronomy is not my field of expertise.
The first is in your function get_I:
def get_I(x, y, Z):
    return Z[np.round(x).astype(int), np.round(y).astype(int)]
When you call that function, x and y outline an ellipse with its center at the origin (0, 0). That means x and y both become negative at some point. The indexing you perform in that function will then take values from the array's last elements, because Z[0,0] is in fact the top left corner of the image (which you plotted, but commented out), while Z[-1,-1] is the bottom right corner. What you want is to take the values of Z that are on the ellipse contour, but both have to have the same center. To do that, you would first make sure you use an odd number of samples for n (which ultimately defines the shape of Z), and second, you would add an indexing offset:
def get_I(x, y, Z):
    offset = Z.shape[0]//2
    return Z[np.round(y).astype(int) + offset, np.round(x).astype(int) + offset]
...
n = np.linspace(-100,100,1001) # changed from 1000 to 1001 to ensure a point of origin is present and that the image exhibits point symmetry
Also notice that I changed the order of y and x in get_I: that's because you first index along the rows (for which we usually take the y-coordinate) and only then along the columns (which map to the x-coordinate in most conventions).
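A tiny illustration of that row/column convention:
Z = np.arange(12).reshape(3, 4)  # 3 rows (y direction), 4 columns (x direction)
print(Z[1, 3])                   # row index (y) first, column index (x) second -> 7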
The second item that struck me as unusual is that your ellipse has its axes at an angle of pi/4 with respect to the horizontal axis, whereas your sersic (which maps to the 2D array of Z) does not have a tilt at all.
Changing all that, I end up with this code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
def rotate(x, y, angle):
    x1 = x*np.cos(angle) + y*np.sin(angle)
    y1 = y*np.cos(angle) - x*np.sin(angle)
    return x1, y1
def sersic_1d(R, mu0, h, n, zp=0):
    exponent = (R / h) ** (1 / n)
    I0 = np.exp((zp - mu0) / 2.5)
    return I0 * np.exp(-1. * exponent)
def sersic_2d(x, y, e, ang, mu0, h, n, zp=0):
    xp, yp = rotate(x, y, ang)
    alpha = np.arctan2(yp, xp * (1-e))
    a = xp / np.cos(alpha)
    b = a * (1 - e)
    return sersic_1d(a, mu0, h, n, zp)
def ellipse(x0, y0, a, e, i, theta):
    b = a * (1 - e)  # half of a
    x = a * np.cos(theta)
    y = b * np.sin(theta)
    x, y = rotate(x, y, i)  # rotated by 45deg
    return x + x0, y + y0
def get_I(x, y, Z):
    offset = Z.shape[0]//2
    return Z[np.round(y).astype(int) + offset, np.round(x).astype(int) + offset]
    #return Z[np.round(y).astype(int), np.round(x).astype(int)]
if __name__ == '__main__':
    n = np.linspace(-100, 100, 1001)  # changed
    nx, ny = np.meshgrid(n, n)
    ang = 0  # np.pi / 4.
    Z = sersic_2d(nx, ny, 0.5, ang=0, mu0=0, h=50, n=1, zp=25)
    f, ax = plt.subplots(1, 2)
    dn = n[1] - n[0]
    ax[0].imshow(Z, cmap='gray', aspect='equal', extent=[-100-dn/2, 100+dn/2, -100-dn/2, 100+dn/2])
    theta = np.linspace(0, 2*np.pi, 1000)
    a = 20.  # decreased long axis of ellipse to see the intensity-map closer to the "center of the galaxy"
    e = 0.5
    x, y = ellipse(0, 0, a, e, ang, theta)
    I = get_I(x, y, Z)
    ax[0].plot(x, y)  # easier to see where you want the intensities
    ax[1].plot(I)
    plt.show()
and this image:
The intensity variations look like quantisation noise to me, with the exception of the peaks, which are due to the asymptote in sersic_1d.
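If that quantisation is an issue, one option (a sketch, not part of the answer above) is to sample Z with bilinear interpolation instead of rounding to the nearest pixel, for example with scipy.ndimage.map_coordinates:
from scipy.ndimage import map_coordinates

def get_I_interp(x, y, Z, n):
    # map physical coordinates onto (fractional) array indices;
    # n is the 1D grid used to build Z and is assumed to be uniform
    scale = (len(n) - 1) / (n[-1] - n[0])
    rows = (y - n[0]) * scale
    cols = (x - n[0]) * scale
    return map_coordinates(Z, [rows, cols], order=1)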