Summary:
I have a function I want to put through curve_fit that is piecewise, but whose pieces do not have a clean analytical form. I've gotten the process to work by including a somewhat janky for loop, but it runs pretty slowly: 1-2 minutes for relatively small data sets with N=10,000. I'm hoping for advice on how to (a) use numpy broadcasting to speed up the operation (no for loop) or (b) do something totally different that gets the same type of results, but much faster.
The function z=f(x,y; params) I'm working with is piecewise, monotonically increasing in y until it reaches the maximum value of f, at point y_0, and then saturates and becomes constant. My problem is that the break-point y_0 is not analytic, so finding it requires some optimization. This would be relatively easy, except that I have real data in both x and y, and the break-point is a function of the fitting parameter c. All the data, x, y, and z, have instrumentation noise.
Example problem:
The function below has been changed to make it easier to illustrate the problem I'm trying to deal with. Yes, I realize it is analytically solvable, but my real problem is not.
f(x, y; c) =
    y*(x - c*y),  for y <= x/(2*c)
    x**2/(4*c),   for y > x/(2*c)
The break-point y_0 = x/(2*c) is found by taking the derivative of f with respect to y and solving for the maximum. The maximum f_max = x**2/(4*c) is found by putting y_0 back into f. The problem is that the break-point depends on both the x-value and the fitting parameter c, so I can't compute the break-point outside the internal loop.
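For reference, this illustrative case broadcasts directly, since y_0 has a closed form here (which my real problem lacks); a sketch:

import numpy as np

def f_analytic(xy, c=1):
    # fully broadcast evaluation; only possible because y_0 = x/(2*c) is closed-form
    x, y = xy
    y0 = x / (2*c)
    return np.where(y <= y0, y*(x - c*y), x**2/(4*c))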
Code
I have reduced the number of points to ~500 points to allow the code to run in a reasonable amount of time. My real data has >10,000 points.
import numpy as np
from scipy.optimize import curve_fit,fminbound
import matplotlib.pyplot as plt
def function(xy, c=1):
    x, y = xy
    fxn_xy = lambda x, y: y*(x - c*y)
    y_max = np.zeros(len(x))    # array for the break-point y_0 of each x
    fxn_max = np.zeros(len(x))  # array for the saturated value at each x
    '''This loop is the first part I'd like to optimize, since it requires
    O(1/delta) time for each individual pass through the fitting function'''
    for i, X in enumerate(x):
        # X is the specific value of x to solve for; reduce the function
        # to one variable (y) by fixing x=X for this pass of the loop
        fxn_y = lambda y: fxn_xy(X, y)
        max_y = fminbound(lambda Y: -fxn_y(Y), 0, X, full_output=True)
        y_max[i] = max_y[0]
        fxn_max[i] = -max_y[1]
    return np.where(y <= y_max,
                    fxn_xy(x, y),
                    fxn_max
                    )
''' Create and plot 'real' data '''
delta = 0.01
xs = [0.5, 1., 1.5, 2.]  # the distinct x values; each is repeated for a full y sweep
# create repeated x for each xs value. +1 used to match the size of y, below
x = np.hstack([[X]*int(X//delta + 1) for X in xs])
# create sweep from 0 to the current value of x, with spacing=delta
y = np.hstack([np.arange(0, X, delta) for X in xs])
z = function((x, y), c=0.75)
# introduce random instrumentation noise to x, y, z
x += np.random.normal(0, 0.005, size=len(x))
y += np.random.normal(0, 0.005, size=len(y))
z += np.random.normal(0, 0.05, size=len(z))
fig = plt.figure(1,(12,8))
axis1 = fig.add_subplot(121)
axis2 = fig.add_subplot(122)
axis1.scatter(y,x,s=1)
#axis1.plot(x)
#axis1.plot(z)
axis1.set_ylabel("x value")
axis1.set_xlabel("y value")
axis2.scatter(y,z, s=1)
axis2.set_xlabel("y value")
axis2.set_ylabel("z(x,y)")
'''Curve Fitting process to find optimal c'''
popt, pcov = curve_fit(function, (x,y),z,bounds=(0,2))
axis2.scatter(y, function((x,y),*popt), s=0.5, c='r')
print "c_est = {:.3g} ".format(popt[0])
The results are plotted below, with "real" x,y,z values (blue) and fitted values (red).
Notes: my intuition says the key is figuring out how to broadcast the x variable such that I can use it in fminbound. But that may be naive. Thoughts?
Thanks everybody!
Edit: to clarify, the x-values are not always fixed in groups like that; they could instead be swept while the y-values are held steady. Which is unfortunately why I need some way of dealing with each x value so many times.
There are several things that can be optimised. One problem is the data structure. If I understand the code correctly, you look for the max for all x. However, you made the structure such that the same values are repeated over and over again. Hence, here you waste a lot of computational effort.
I am not sure how difficult the evaluation of f is in reality, but I assume that it is not significantly more costly than the optimization. So in my solution I just calculate the full array, look for the maximum, and change the values coming afterwards.
I guess my code can be optimized as well, but right now it looks like:
import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
def function(yArray, x=1, c=1):
    out = np.fromiter((y * (x - c * y) for y in yArray), float)
    pos = np.argmax(out)
    outMax = out[pos]
    return np.fromiter((v if i < pos else outMax for i, v in enumerate(out)), float)
def residuals(param, xArray, yList, zList):
    c = param
    zListTheory = [function(yArray, x=X, c=c) for yArray, X in zip(yList, xArray)]
    diffList = [zArray - zArrayTheory for zArray, zArrayTheory in zip(zList, zListTheory)]
    out = [item for diffArray in diffList for item in diffArray]
    return out
''' Create and plot 'real' data '''
delta = 0.01
xArray = np.array( [ 0.5, 1., 1.5, 2. ] ) #keep this as parameter
#create sweep from 0 to the current value of x, with spacing=delta
yList = [ np.arange( 0, X, delta ) for X in xArray ] ## as list of arrays
zList = [ function( yArray, x=X, c=0.75 ) for yArray, X in zip( yList, xArray ) ]
fig = plt.figure(1, (12, 8))
ax = fig.add_subplot(1, 1, 1)
for y, z in zip(yList, zList):
    ax.plot(y, z)
#introduce random instrumentation noise
yRList = [yArray + np.random.normal(0, 0.02, size=len(yArray)) for yArray in yList]
zRList = [zArray + np.random.normal(0, 0.02, size=len(zArray)) for zArray in zList]
ax.set_prop_cycle(None)
for y, z in zip(yRList, zRList):
    ax.plot(y, z, marker='o', markersize=2, ls='')
sol, cov, info, msg, ier = leastsq(residuals, x0=.9, args=(xArray, yRList, zRList), full_output=True)
print("c_est = {:.3g}".format(sol[0]))
plt.show()
providing
>> c_est = 0.752
The original curves and the noisy data are shown in the corresponding plot.
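As a side note (my addition, not part of the original answer): since each y sweep is sorted in increasing order and f rises to its maximum before falling, the saturate-after-the-peak step can be vectorized with np.maximum.accumulate, avoiding the per-element generator expressions:

import numpy as np

def function_vec(yArray, x=1, c=1):
    # evaluate f over the whole (sorted) sweep at once
    out = yArray * (x - c * yArray)
    # the running maximum equals out while f rises, and stays frozen at the peak afterwards
    return np.maximum.accumulate(out)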
Related
From https://stackoverflow.com/a/30460089/2202107, we can generate the empirical CDF of a normal distribution:
import numpy as np
import matplotlib.pyplot as plt
N = 100
Z = np.random.normal(size = N)
# method 1
H, X1 = np.histogram(Z, bins=10, density=True)
dx = X1[1] - X1[0]
F1 = np.cumsum(H)*dx
#method 2
X2 = np.sort(Z)
F2 = np.array(range(N))/float(N)
# plt.plot(X1[1:], F1)
plt.plot(X2, F2)
plt.show()
Question: How do we generate the "original" normal distribution, given only x (eg X2) and y (eg F2) coordinates?
My first thought was plt.plot(x, np.gradient(y)), but the gradient of y was all zero (the data points are evenly spaced in y, but not in x). This kind of data often comes up in percentile calculations. The key is to get the data evenly spaced in x, not in y, using interpolation:
x=X2
y=F2
num_points=10
xinterp = np.linspace(-2,2,num_points)
yinterp = np.interp(xinterp, x, y)
# for normalizing that sum of all bars equals to 1.0
tot_val=1.0
normalization_factor = tot_val/np.trapz(np.ones(len(xinterp)),yinterp)
plt.bar(xinterp, normalization_factor * np.gradient(yinterp), width=0.2)
plt.show()
output looks good to me:
I put my approach here for examination. Let me know if my logic is flawed.
One issue is: when num_points is large, the plot looks bad, but it's an issue of discretization, and I'm not sure how to avoid it.
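One mitigation I'm considering (just a sketch, with a hypothetical window parameter): smooth the interpolated CDF with a moving average before differentiating:

import numpy as np

def smoothed_pdf(xinterp, yinterp, window=5):
    # moving-average smoothing of the interpolated CDF, then differentiate;
    # window is a tuning parameter, and edge effects remain near the boundaries
    kernel = np.ones(window) / window
    ysmooth = np.convolve(yinterp, kernel, mode='same')
    return np.gradient(ysmooth, xinterp)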
Related posts:
I failed to understand why the answer was so complicated in https://stats.stackexchange.com/a/6065/131632
I also didn't understand why my approach was different than Generate distribution given percentile ranks
See the edit below for details.
I have a dataset on which I need to perform an IFFT, cut the valuable part of it (by multiplying with a gaussian curve), then FFT back.
The data is in the angular frequency domain first, so an IFFT leads to the time domain. FFT-ing back should then lead to angular frequency again, but I can't seem to find a way to get back the original domain. Of course it's easy for the y-values:
yf = np.fft.ifft(y)
#cut the valuable part here...
np.fft.fft(yf)
For the x-value transforms I'm using np.fft.fftfreq the following way:
# x is in ang. frequency domain, that's the reason for the 2*np.pi division
t = np.fft.fftfreq(len(x), d=(x[1]-x[0])/(2*np.pi))
However, doing
x = np.fft.fftfreq(len(t), d=2*np.pi*(t[1]-t[0]))
does not give me back the original x values at all. Is there something I'm misunderstanding?
The question can be asked in a more general way, for example:
import numpy as np
x = np.arange(100)
xx = np.fft.fftfreq(len(x), d = x[1]-x[0])
# how to get back the original x from xx? Is it even possible?
I've tried to use a temporal variable where I store the original x values, but it's not too elegant. I'm looking for some kind of inverse of fftfreq, and in general the possible best solution for that problem.
Thank you.
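For the generalized example, the closest thing to an inverse I have found is recovering the spacing; the offset x[0] appears to be genuinely lost (a sketch of mine):

import numpy as np

x = np.arange(100)
xx = np.fft.fftfreq(len(x), d=x[1] - x[0])
# fftfreq(N, d)[1] == 1/(N*d), so the original spacing can be recovered:
N = len(xx)
d = 1.0 / (N * xx[1])
x_rec = np.arange(N) * d  # equals x up to the (lost) offset x[0]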
EDIT:
I will provide the code at the end.
I have a dataset which has angular frequency on the x axis and intensity on the y axis. I want to perform an IFFT to change to the time domain. Unfortunately the x values are not evenly spaced, so a (linear) interpolation is needed before the IFFT. In the time domain the transform then shows two symmetrical spikes.
The next step is to cut one of the symmetrical spikes with a gaussian curve, then FFT back to the angular frequency domain (the same one we started from). My problem is that when I transform the x-axis for the IFFT (which I think is correct), I can't get back into the original angular frequency domain. Here is the code, which includes the generator for the dataset too.
import numpy as np
import matplotlib.pyplot as plt
import scipy
from scipy.interpolate import interp1d
C_LIGHT = 299.792
# for easier case, this is zero, so it can be ignored.
def _disp(x, GD=0, GDD=0, TOD=0, FOD=0, QOD=0):
    return x*GD + (GDD/2)*x**2 + (TOD/6)*x**3 + (FOD/24)*x**4 + (QOD/120)*x**5

# the generator to make sample datasets
def generator(start, stop, center, delay, GD=0, GDD=0, TOD=0, FOD=0, QOD=0, resolution=0.1, pulse_duration=15, chirp=0):
    window = (np.sqrt(1+chirp**2)*8*np.log(2))/(pulse_duration**2)
    lamend = (2*np.pi*C_LIGHT)/start
    lamstart = (2*np.pi*C_LIGHT)/stop
    lam = np.arange(lamstart, lamend+resolution, resolution)
    omega = (2*np.pi*C_LIGHT)/lam
    relom = omega-center
    i_r = np.exp(-(relom)**2/(window))
    i_s = np.exp(-(relom)**2/(window))
    i = i_r + i_s + 2*np.sqrt(i_r*i_s)*np.cos(_disp(relom, GD=GD, GDD=GDD, TOD=TOD, FOD=FOD, QOD=QOD)+delay*omega)
    # since the _disp polynomial is set to be zero, it's just cos(delay*omega)
    return omega, i

def interpol(x, y):
    ''' Simple linear interpolation '''
    xs = np.linspace(x[0], x[-1], len(x))
    intp = interp1d(x, y, kind='linear', fill_value='extrapolate')
    ys = intp(xs)
    return xs, ys

def ifft_method(initSpectrumX, initSpectrumY, interpolate=True):
    if len(initSpectrumY) > 0 and len(initSpectrumX) > 0:
        Ydata = initSpectrumY
        Xdata = initSpectrumX
    else:
        raise ValueError
    N = len(Xdata)
    if interpolate:
        Xdata, Ydata = interpol(Xdata, Ydata)
        # the (2*np.pi) division is because we have angular frequency, not frequency
        xf = np.fft.fftfreq(N, d=(Xdata[1]-Xdata[0])/(2*np.pi)) * N * Xdata[-1]/(N-1)
        yf = np.fft.ifft(Ydata)
    else:
        pass  # some irrelevant code here
    return xf, yf

def fft_method(initSpectrumX, initSpectrumY):
    if len(initSpectrumY) > 0 and len(initSpectrumX) > 0:
        Ydata = initSpectrumY
        Xdata = initSpectrumX
    else:
        raise ValueError
    yf = np.fft.fft(Ydata)
    xf = np.fft.fftfreq(len(Xdata), d=(Xdata[1]-Xdata[0])*2*np.pi)
    # the problem is here, where I transform the x values.
    xf = np.fft.ifftshift(xf)
    return xf, yf
# the generated data
x, y = generator(1, 3, 2, delay = 1500, resolution = 0.1)
# plt.plot(x,y)
xx, yy = ifft_method(x,y)
#if the x values are correctly scaled, the two symmetrical spikes should appear exactly at delay value
# plt.plot(xx, np.abs(yy))
#do the cutting there, which is also irrelevant now
# the problem is there, in fft_method. The x values are not the same as before transforms.
xxx, yyy = fft_method(xx, yy)
plt.plot(xxx, np.abs(yyy))
#and it should look like this:
#xs = np.linspace(x[0], x[-1], len(x))
#plt.plot(xs, np.abs(yyy))
plt.grid()
plt.show()
I have measured data on a three dimensional grid, e.g. f(x, y, t). I want to interpolate and smooth this data in the direction of t with splines.
Currently, I do this with scipy.interpolate.UnivariateSpline:
import numpy as np
from scipy.interpolate import UnivariateSpline
# data is my measured data
# data.shape is (len(y), len(x), len(t))
data = np.arange(1000).reshape((5, 5, 40)) # just for demonstration
times = np.arange(data.shape[-1])
y = 3
x = 3
sp = UnivariateSpline(times, data[y, x], k=3, s=6)
However, I need the spline to have vanishing derivatives at t=0. Is there a way to enforce this constraint?
The best thing I can think of is to do a minimization with a constraint, using scipy.optimize.minimize. It is pretty easy to take the derivative of a spline, so the constraint is simple. I would use a regular spline fit (UnivariateSpline) to get the knots (t), hold the knots fixed (and degree k, of course), and vary the coefficients c. Maybe there is a way to vary the knot locations as well, but I will leave that to you.
import numpy as np
from scipy.interpolate import UnivariateSpline, splev, splrep
from scipy.optimize import minimize
def guess(x, y, k, s, w=None):
    """Do an ordinary spline fit to provide knots"""
    return splrep(x, y, w, k=k, s=s)

def err(c, x, y, t, k, w=None):
    """The error function to minimize"""
    diff = y - splev(x, (t, c, k))
    if w is None:
        diff = np.einsum('...i,...i', diff, diff)
    else:
        diff = np.dot(diff*diff, w)
    return np.abs(diff)

def spline_neumann(x, y, k=3, s=0, w=None):
    t, c0, k = guess(x, y, k, s, w=w)
    x0 = x[0]  # point at which zero slope is required
    con = {'type': 'eq',
           'fun': lambda c: splev(x0, (t, c, k), der=1),
           #'jac': lambda c: splev(x0, (t, c, k), der=2) # doesn't help, dunno why
           }
    opt = minimize(err, c0, (x, y, t, k, w), constraints=con)
    copt = opt.x
    return UnivariateSpline._from_tck((t, copt, k))
And then we generate some fake data that should have zero initial slope and test it:
import matplotlib.pyplot as plt
n = 10
x = np.linspace(0, 2*np.pi, n)
y0 = np.cos(x) # zero initial slope
std = 0.5
noise = np.random.normal(0, std, len(x))
y = y0 + noise
k = 3
sp0 = UnivariateSpline(x, y, k=k, s=n*std)
sp = spline_neumann(x, y, k, s=n*std)
plt.figure()
X = np.linspace(x.min(), x.max(), len(x)*10)
plt.plot(X, sp0(X), '-r', lw=1, label='guess')
plt.plot(X, sp(X), '-r', lw=2, label='spline')
plt.plot(X, sp.derivative()(X), '-g', label='slope')
plt.plot(x, y, 'ok', label='data')
plt.legend(loc='best')
plt.show()
Here is one way to do this. The basic idea is to get a spline's coefficients with splrep and then modify them before calling splev. The first few knots in the spline correspond to the lowest value in the range of x values. If the coefficients that correspond to them are set equal to each other, that completely flattens out the spline at that end.
Using the same data, times, x, y as in your example:
# set up example data
data = np.arange(1000).reshape((5, 5, 40))
times = np.arange(data.shape[-1])
y = 3
x = 3
# make 1D spline
import scipy.interpolate
from pylab import * # for plotting
knots, coefficients, degree = scipy.interpolate.splrep(times, data[y, x])
t = linspace(0,3,100)
plot( t, scipy.interpolate.splev(t, (knots, coefficients, degree)) )
# flatten out the beginning
coefficients[:2] = coefficients[0]
plot( t, scipy.interpolate.splev(t, (knots, coefficients, degree)) )
scatter( times, data[y, x] )
xlim(0,3)
ylim(720,723)
Blue: original points and spline through them. Green: modified spline with derivative=0 at the beginning. Both are zoomed in to the very beginning.
plot( t, scipy.interpolate.splev(t, (knots, coefficients, degree), der=1), 'g' )
xlim(0,3)
Call splev(..., der=1) to plot the first derivative. The derivative starts at zero and overshoots a little so the modified spline can catch up (this is inevitable).
The modified spline does not go through the first two points it is based on (it still hits all the other points exactly). It is possible to modify this by adding an extra interior control point next to the origin to get both a zero derivative and go through the original points; experiment with the knots and coefficients until it does what you want.
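As one hedged illustration of that last suggestion (my sketch, not part of the original answer), scipy.interpolate.insert can add an interior knot, after which the leading coefficients can be equalized again; the knot location 0.5 is an arbitrary choice and you will still need to experiment:

import scipy.interpolate

# refit, then insert one extra interior knot near the origin
knots, coefficients, degree = scipy.interpolate.splrep(times, data[y, x])
knots2, coef2, deg2 = scipy.interpolate.insert(0.5, (knots, coefficients, degree))
coef2[:2] = coef2[0]  # flatten the very beginning again
plot(t, scipy.interpolate.splev(t, (knots2, coef2, deg2)))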
Your example does not work (on python 2.7.9), so I will only sketch my idea:
1. Calculate sp.
2. Take the derivative via sp.derivative and evaluate it at the relevant times (probably the same times at which you measured your data).
3. Set the relevant points to zero (e.g. the value at t=0).
4. Calculate another spline from the derivative values.
5. Integrate your spline function. I guess you will have to do this numerically, but that should not be a problem. Do not forget to add a constant, to get your original function (see the sketch below).
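A minimal sketch of these five steps (my code, not the answerer's; it assumes UnivariateSpline's derivative and antiderivative methods and uses stand-in data):

import numpy as np
from scipy.interpolate import UnivariateSpline

times = np.arange(40)
values = np.random.rand(40).cumsum()            # stand-in for data[y, x]

sp = UnivariateSpline(times, values, k=3, s=6)  # 1. calculate sp
dvals = sp.derivative()(times)                  # 2. derivative at the measurement times
dvals[0] = 0.0                                  # 3. set the value at t=0 to zero
dsp = UnivariateSpline(times, dvals, k=3, s=0)  # 4. spline through the derivative values
sp_int = dsp.antiderivative()                   # 5. integrate ...
offset = values[0] - sp_int(times[0])           #    ... and add a constant back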
Python's curve_fit calculates the best-fit parameters for a function with a single independent variable, but is there a way, using curve_fit or something else, to fit for a function with multiple independent variables? For example:
def func(x, y, a, b, c):
    return log(a) + b*log(x) + c*log(y)
where x and y are the independent variables and we would like to fit for a, b, and c.
You can pass curve_fit a multi-dimensional array for the independent variables, but then your func must accept the same thing. For example, calling this array X and unpacking it to x, y for clarity:
import numpy as np
from scipy.optimize import curve_fit
def func(X, a, b, c):
    x, y = X
    return np.log(a) + b*np.log(x) + c*np.log(y)
# some artificially noisy data to fit
x = np.linspace(0.1,1.1,101)
y = np.linspace(1.,2., 101)
a, b, c = 10., 4., 6.
z = func((x,y), a, b, c) * 1 + np.random.random(101) / 100
# initial guesses for a,b,c:
p0 = 8., 2., 7.
print(curve_fit(func, (x,y), z, p0))
Gives the fit:
(array([ 9.99933937,  3.99710083,  6.00875164]),
 array([[  1.75295644e-03,   9.34724308e-05,  -2.90150983e-04],
        [  9.34724308e-05,   5.09079478e-06,  -1.53939905e-05],
        [ -2.90150983e-04,  -1.53939905e-05,   4.84935731e-05]]))
optimizing a function with multiple input dimensions and a variable number of parameters
This example shows how to fit a polynomial with a two dimensional input (R^2 -> R) by an increasing number of coefficients. The design is very flexible so that the callable f from curve_fit is defined once for any number of non-keyword arguments.
minimal reproducible example
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def poly2d(xy, *coefficients):
    x = xy[:, 0]
    y = xy[:, 1]
    proj = x + y
    res = 0
    for order, coef in enumerate(coefficients):
        res += coef * proj ** order
    return res
nx = 31
ny = 21
range_x = [-1.5, 1.5]
range_y = [-1, 1]
target_coefficients = (3, 0, -19, 7)
xs = np.linspace(*range_x, nx)
ys = np.linspace(*range_y, ny)
im_x, im_y = np.meshgrid(xs, ys)
xdata = np.c_[im_x.flatten(), im_y.flatten()]
im_target = poly2d(xdata, *target_coefficients).reshape(ny, nx)
fig, axs = plt.subplots(2, 3, figsize=(29.7, 21))
axs = axs.flatten()
ax = axs[0]
ax.set_title('Unknown polynomial P(x+y)\n[secret coefficients: ' + str(target_coefficients) + ']')
sm = ax.imshow(im_target, cmap=plt.get_cmap('coolwarm'), origin='lower')
fig.colorbar(sm, ax=ax)
for order in range(5):
    ydata = im_target.flatten()
    popt, pcov = curve_fit(poly2d, xdata=xdata, ydata=ydata, p0=[0]*(order+1))
    im_fit = poly2d(xdata, *popt).reshape(ny, nx)
    ax = axs[1+order]
    title = 'Fit O({:d}):'.format(order)
    for o, p in enumerate(popt):
        if o % 2 == 0:
            title += '\n'
        if o == 0:
            title += ' {:=-{w}.1f} (x+y)^{:d}'.format(p, o, w=int(np.log10(max(abs(p), 1))) + 5)
        else:
            title += ' {:=+{w}.1f} (x+y)^{:d}'.format(p, o, w=int(np.log10(max(abs(p), 1))) + 5)
    title += '\nrms: {:.1f}'.format(np.mean((im_fit - im_target)**2)**.5)
    ax.set_title(title)
    sm = ax.imshow(im_fit, cmap=plt.get_cmap('coolwarm'), origin='lower')
    fig.colorbar(sm, ax=ax)
for ax in axs.flatten():
    ax.set_xlabel('x')
    ax.set_ylabel('y')
plt.show()
P.S. The concept of this answer is identical to my other answer here, but the code example is way more clear. When I find the time, I will delete the other answer.
Fitting to an unknown number of parameters
In this example, we try to reproduce some measured data measData.
In this example, measData is generated by the function measuredData(x, a=.2, b=-2, c=-.8, d=.1). In practice, we might have measured measData in some way, so we have no idea how it is described mathematically. Hence the fit.
We fit by a polynomial, which is described by the function polynomFit(inp, *args). As we want to try out different orders of polynomials, it is important to be flexible in the number of input parameters.
The independent variables (x and y in your case) are encoded in the 'columns'/second dimension of inp.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def measuredData(inp, a=.2, b=-2, c=-.8, d=.1):
    x = inp[:, 0]
    y = inp[:, 1]
    return a + b*x + c*x**2 + d*x**3 + y

def polynomFit(inp, *args):
    x = inp[:, 0]
    y = inp[:, 1]
    res = 0
    for order in range(len(args)):
        res += args[order] * x**order
    return res + y
inpData = np.linspace(0, 10, 20).reshape(-1, 2)
inpDataStr = ['({:.1f},{:.1f})'.format(a, b) for a, b in inpData]
measData = measuredData(inpData)

fig, ax = plt.subplots()
ax.plot(np.arange(inpData.shape[0]), measData, label='measured', marker='o', linestyle='none')

for order in range(5):
    popt, pcov = curve_fit(polynomFit, xdata=inpData, ydata=measData, p0=[0]*(order+1))
    fitData = polynomFit(inpData, *popt)
    ax.plot(np.arange(inpData.shape[0]), fitData, label='polyn. fit, order '+str(order), linestyle='--')
    ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
    print(order, popt)
ax.set_xticklabels(inpDataStr, rotation=90)
Result:
Yes. We can pass multiple variables to curve_fit. I have written a piece of code:
import numpy as np
from scipy.optimize import curve_fit

x = np.random.randn(2, 100)
w = np.array([1.5, 0.5]).reshape(1, 2)
esp = np.random.randn(1, 100)
y = np.dot(w, x) + esp
y = y.reshape(100,)
In the above code I have generated x, a 2D data set of shape (2, 100), i.e. two variables with 100 data points each. The dependent variable y is built from the independent variables x with some noise added.
def model_func(x, w1, w2, b):
    w = np.array([w1, w2]).reshape(1, 2)
    b = np.array([b]).reshape(1, 1)
    y_p = np.dot(w, x) + b
    return y_p.reshape(100,)
We have defined a model function that establishes the relation between y and x.
Note: the shape of the output of the model function (the predicted y) should be (length of x,).
popt, pcov = curve_fit(model_func,x,y)
popt is a 1D numpy array containing the fitted parameters; in our case there are 3 of them.
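For example (a usage sketch of mine), the fitted parameters can be unpacked directly:

w1_fit, w2_fit, b_fit = popt
print(w1_fit, w2_fit, b_fit)  # should come out close to 1.5, 0.5 and 0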
Yes, there is: simply give curve_fit a multi-dimensional array for xData.
I solve a differential equation with vector inputs
y' = f(t,y), y(t_0) = y_0
where y0 = y(x)
using the explicit Euler method, which says that
y_(i+1) = y_i + h*f(t_i, y_i)
where t is a time vector, h is the step size, and f is the right-hand side of the differential equation.
The python code for the method looks like this:
for i in np.arange(0, n-1):
    y[i+1,...] = y[i,...] + dt*myode(t[i], y[i,...])
The result is a k,m matrix y, where k is the size of the t dimension, and m is the size of y.
The vectors y and t are returned.
t, x, and y are passed to scipy.interpolate.RectBivariateSpline(t, x, y, kx=1, ky=1):
g = scipy.interpolate.RectBivariateSpline(t, x, y, kx=1, ky=1)
The resulting object g takes new vectors ti,xi ( g(p,q) ) to give y_int, which is y interpolated at the points defined by ti and xi.
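To make the axis order concrete, here is a small shape check (my own sketch with toy sizes, not the question's data):

import numpy as np
from scipy.interpolate import RectBivariateSpline

t = np.linspace(0, 1, 6)   # first grid axis (time)
x = np.linspace(0, 1, 4)   # second grid axis (space)
y = np.outer(t, x)         # grid values, shape (len(t), len(x))
g = RectBivariateSpline(t, x, y, kx=1, ky=1)

ti = np.linspace(0, 1, 10)
xi = np.linspace(0, 1, 8)
y_int = g(ti, xi)
print(y_int.shape)         # (10, 8): rows follow ti, columns follow xi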
Here is my problem:
The documentation for RectBivariateSpline describes the __call__ method in terms of x and y:
__call__(x, y[, mth]) Evaluate spline at the grid points defined by the coordinate arrays
The matplotlib documentation for plot_surface uses similar notation:
Axes3D.plot_surface(X, Y, Z, *args, **kwargs)
with the important difference that X and Y are 2D arrays which are generated by numpy.meshgrid().
When I compute simple examples, the input order is the same in both and the result is exactly what I would expect. In my explicit Euler example, however, the initial order is ti,xi, yet the surface plot of the interpolant output only makes sense if I reverse the order of the inputs, like so:
ax2.plot_surface(xi, ti, u, cmap=cm.coolwarm)
While I am glad that it works, I'm not satisfied because I cannot explain why, nor why (apart from the array geometry) it is necessary to swap the inputs. Ideally, I would like to restructure the code so that the input order is consistent.
Here is a working code example to illustrate what I mean:
# Heat equation example with explicit Euler method
import numpy as np
import matplotlib.pyplot as mplot
import matplotlib.cm as cm
import scipy.sparse as sp
import scipy.interpolate as interp
from mpl_toolkits.mplot3d import Axes3D
import pdb
# explicit Euler method
def eev(myode, tspan, y0, dt):
    # Preprocessing
    # Time steps
    tspan[1] = tspan[1] + dt
    t = np.arange(tspan[0], tspan[1], dt, dtype=float)
    n = t.size
    m = y0.shape[0]
    y = np.zeros((n, m), dtype=float)
    y[0,:] = y0
    # explicit Euler recurrence relation
    for i in np.arange(0, n-1):
        y[i+1,...] = y[i,...] + dt*myode(t[i], y[i,...])
    return y, t
# generate matrix A
# u'(t) = A*u(t) + g*u(t)
def a_matrix(n):
    aa = sp.diags([1, -2, 1], [-1, 0, 1], (n, n))
    return aa

# System of ODEs with finite differences
def f(t, u):
    dydt = np.divide(1, h**2)*A.dot(u)
    return dydt

# homogeneous Dirichlet boundary conditions
def rbd(t):
    ul = np.zeros((t, 1))
    return ul
# Initial value problem -----------
def main():
    # Metal rod
    # spatial discretization
    # number of inner nodes
    m = 20
    x0 = 0
    xn = 1
    x = np.linspace(x0, xn, m+2)
    # Step size
    global h
    h = x[1] - x[0]
    # Initial values
    u0 = np.sin(np.pi*x)
    # A matrix
    global A
    A = a_matrix(m)
    # Time
    t0 = 0
    tend = 0.2
    # Time step width
    dt = 0.0001
    tspan = [t0, tend]
    # Test r for stability
    r = np.divide(dt, h**2)
    if r <= 0.5:
        u, t = eev(f, tspan, u0[1:-1], dt)
    else:
        print('r = ', r)
        print('r > 0.5. Explicit Euler method will not be stable.')
    # Add boundary values back
    rb = rbd(t.size)
    u = np.hstack((rb, u, rb))
    # Interpolate heat values
    # Create interpolant. Note the parameter order
    fi = interp.RectBivariateSpline(t, x, u, kx=1, ky=1)
    # Create vectors for interpolant
    xi = np.linspace(x[0], x[-1], 100)
    ti = np.linspace(t0, tend, 100)
    # Compute function values from interpolant
    u_int = fi(ti, xi)
    # Change xi, ti into 2D arrays
    xi, ti = np.meshgrid(xi, ti)
    # Create figure and axes objects
    fig3 = mplot.figure(1)
    ax3 = fig3.gca(projection='3d')
    print('xi.shape =', xi.shape, 'ti.shape =', ti.shape, 'u_int.shape =', u_int.shape)
    # Plot surface. Note the parameter order, compare with interpolant!
    ax3.plot_surface(xi, ti, u_int, cmap=cm.coolwarm)
    ax3.set_xlabel('xi')
    ax3.set_ylabel('ti')

main()
mplot.show()
As far as I can see, you define:
# Change xi, ti into 2D arrays
xi, ti = np.meshgrid(xi, ti)
Change this to :
ti,xi = np.meshgrid(ti,xi)
and
ax3.plot_surface(xi, ti, u_int, cmap=cm.coolwarm)
to
ax3.plot_surface(ti, xi, u_int, cmap=cm.coolwarm)
and it works fine (if I understood the problem correctly).