Python: arguments for maximum

I have this code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def rownanie(Y, t, l, q, a, u):
    y1, y2, z1, z2 = Y
    dydt = [y2, ((l*q)/a)*(1/y1)*(1-z2*u), z2, (a*y2*u)/y1]
    return dydt
l = 100
q = 1
a = 10
u = 0.25
y0 = -1
z0 = 0
y0_prim, z0_prim = 0, 0
t = np.linspace(0, 100, 10001)
sol = odeint(rownanie, [y0, y0_prim, z0, z0_prim], t, args=(l,q,a,u))
print(sol)
plt.plot(sol[:, 0], sol[:, 2])
plt.xlabel('Y')
plt.ylabel('Z')
plt.grid()
So I have 4 columns of data, let's say [:, 0] through [:, 3]. I have to focus on only two of them: [:, 0] and [:, 2]. When I plot them against each other, the result is a harmonic function. [:, 0] are the values, [:, 2] are the arguments. I need to find the arguments for which the values are maximal, or alternatively the difference (the distance) between two such arguments (two maxima). I tried with "if", but the values are approximations, so they are never exactly equal. Could you help me with this one?

You were right: you need to define a tolerance for the difference with respect to the maximum value. I marked the points for clarification. The idea is to first compute the difference from the maximum of the values, max(sol[:, 0]). Then you can use NumPy's boolean indexing with a tolerance of 1e-4: [abs(diff) < 1e-4] returns the indices where this condition holds True. Now you have these five maximum points and can do whatever processing you want with them. The choice of tolerance also depends on the number of mesh points (10001 in this case), so it requires some playing around; one could also write a function to pick it more systematically.
diff = sol[:, 0] - max(sol[:, 0])
plt.plot(sol[:, 0], sol[:, 2])
plt.plot(sol[:, 0][abs(diff) < 1e-4], sol[:, 2][abs(diff) < 1e-4], 'kx')

[Graph of sol[:, 0] vs sol[:, 2] with the maxima marked]
And I need to find this difference, but every maximum is a little bit different.
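If playing with a tolerance feels fragile, a minimal alternative sketch (my addition, assuming SciPy is installed and sol comes from the odeint call above) is to locate the local maxima directly with scipy.signal.find_peaks and take the spacing between consecutive peaks:
from scipy.signal import find_peaks

# indices of the local maxima of the values in sol[:, 0]
peak_idx, _ = find_peaks(sol[:, 0])

# the corresponding arguments and the spacing between consecutive maxima
peak_args = sol[peak_idx, 2]
spacings = np.diff(peak_args)
print(peak_args, spacings)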


How to get data points after plot it? (Python)

I just used scipy's odeint to solve a system of differential equations and matplotlib to plot it, and I got the graphs. My question is: can I get specific data points, e.g. at t = 1, what are x1, x2, x3? I need the concentration values at t = 1, 2, 3, 4, ... Thank you.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Dose = 100
V = 43.8
k12 = 1.2 # rate of central -> peripheral
k21 = 1.4 # rate of peripheral -> central
kel = 0.20 # rate of excrete from plasma
def diff(d_list, t):
    x1, x2, x3 = d_list
    # X1(t), X2(t), X3(t)
    return np.array([(-k12*x1 - kel*x1 + k21*x2),
                     (k12*x1 - k21*x2),
                     (kel*x1)])
t = np.linspace(0, 24, 960)
result = odeint(diff, [(Dose/V), 0, 0], t)
plt.plot(t, result[:, 0], label='x1: central')
plt.plot(t, result[:, 1], label='x2: tissue')
plt.plot(t, result[:, 2], label='x3: excreted')
plt.legend()
plt.xlabel('t (hr)')
plt.ylabel('Concentration (mg/L)')
plt.show()
This is not related to matplotlib or scipy. You can either interpolate or get the closest data point.
Interpolated value
If you need to get the x1, x2 and x3 for values of t which do not correspond to a data point (you mentioned 1,2,3,4 which are not in your t array), you will need to interpolate. To get x1, x2 and x3 at t=1, you can do (at the end of your script):
valuesAt1 = [np.interp(1, t, result[:,col]) for col in range(result.shape[1])]
The output of print(valuesAt1) is then:
[1.1059703843218311, 0.8813129004034452, 0.2958217381057726]
If you only need x1, just do
valuesAt1 = np.interp(1, t, result[:,0])
then, the output of print(valuesAt1) is:
1.1059703843218311
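As a side note (not part of the original answer), np.interp also accepts an array of query points, so all the integer hours can be interpolated in one call; a minimal sketch using t and result from the script above:
t_query = np.arange(1, 25)                       # t = 1, 2, ..., 24
x1_at_hours = np.interp(t_query, t, result[:, 0])
print(x1_at_hours)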
Closest data point
If you do not want to interpolate but want the values of x1, x2 and x3 at the element of the t array that is closest to 1, do:
valuesAtClosestPointFrom1 = result[ np.argmin(np.abs(t-1))]
The output from print(valuesAtClosestPointFrom1) is:
[1.10563546 0.88141641 0.29605315]
This can be done by interpolation and using scipy.interpolate.InterpolatedUnivariateSpline as follows:
from scipy.interpolate import InterpolatedUnivariateSpline
splx1 = InterpolatedUnivariateSpline(t, result[:,0])
splx2 = InterpolatedUnivariateSpline(t, result[:,1])
splx3 = InterpolatedUnivariateSpline(t, result[:,2])
Firstly, you pass the x and y data that you want to interpolate to the spline constructor. Secondly, create an array of the x values at which you want the interpolated y values.
import numpy as np
desired_time = np.arange(1,25)
x1 = splx1(desired_time)
x2 = splx2(desired_time)
x3 = splx3(desired_time)
Lastly, pass that array to the respective spline object to get the desired values. In the example above, a desired_time array from 1 to 24 is created with np.arange and passed to the spline objects.

In python, how to discretize continuous variable using accuracy as a criterion taking class into consideration

For a set of subjects I have a continuous variable with range 0-100 representing a quantification of a subject's state cont_attribute. For each subject I also have an ordinal variable representing reader annotation of subject's state as one of four states (e.g. 1, 2, 3, 4) class_label. Values for cont_attribute overlap between classes. My goal is to discretize cont_attribute so that agreement with class is optimized.
When discretizing cont_attribute, arbitrary thresholds x1, x2, x3 can be applied to the continuous variable directly, to yield bins of four ordinal categories and agreement with reader annotation class can be assessed:
cohen_kappa_score(pd.cut(df['cont_attribute'], bins=[0, x1, x2, x3, 100],
                         labels=['1', '2', '3', '4']).astype('int'),
                  df['class_label'].astype('int'))
I have found several options for discretization of a continuous variable, such as Jenks natural breaks and sklearn KMeans, though these options do not take class into account.
What I tried:
I attempted to optimize the function above to yield the maximal value using scipy.optimize.minimize. Here, for each threshold between two classes, I use the minimum value of the larger class and the maximum value of the smaller class as the range within which to search for the optimal cutoff between those classes. With this approach I run into a problem, prompting:
ValueError: bins must increase monotonically.
def objfunc(grid):
    x1, x2, x3 = grid
    return -cohen_kappa_score(pd.cut(df.cont_attribute, bins=[0, x1, x2, x3, 100],
                                     labels=['1', '2', '3', '4'],
                                     duplicates='drop').astype('int'),
                              df['class_label'].astype('int'))

grid = (slice(df[df['class_label'] == 2]['cont_attribute'].min(),
              df[df['class_label'] == 1]['cont_attribute'].max(), 0.5),
        slice(df[df['class_label'] == 3]['cont_attribute'].min(),
              df[df['class_label'] == 2]['cont_attribute'].max(), 0.5),
        slice(df[df['class_label'] == 4]['cont_attribute'].min(),
              df[df['class_label'] == 3]['cont_attribute'].max(), 0.5))
solution = brute(objfunc, grid, finish=None, full_output=True)
solution
In python, is there a straightforward way to optimize thresholds x1, x2, x3 taking agreement with class into account (supervised discretization)? Alternatively, how can the above function be rewritten to yield a maximum using scipy.optimize.minimize?
The error message is not too hard to interpret: the pandas cut method demands that the bin vector [0, x1, x2, x3, 100] be strictly monotonic. By adding a mechanism that makes sure no invalid values are passed to cut, we are safe; that is what I implemented below. To denote an invalid setting, it is customary to return np.inf, since every other value is lower, so any minimizer will treat such a setting as undesirable. See the implementation below. I also included all the imports and some data generation so that the code is simple to run; please do so in future questions as well.
You might want to use more than 10 bins per dimension in the brute-force search.
Also, the code is quite inefficient: it brute-forces over all combinations of x1, x2, x3, but many of them are invalid (e.g. x2 <= x1), so you might want to parametrize the problem in (x1, x2 - x1, x3 - x2) instead and search over nonnegative values in the second and third components (see the sketch after the code below).
Finally, brute is a minimizer, so you should return -cohen_kappa from the objective.
#%%
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from scipy.stats import truncnorm
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.optimize import brute

#
# Generate Data
#
n = 1000
np.random.seed(0)
y = np.random.choice(4, p=[0.1, 0.3, 0.4, 0.2], size=n)
x = np.zeros(n)
for i in range(5):
    low = 0
    high = 100
    mymean = 20 * i
    myscale = 8
    a, b = (low - mymean) / myscale, (high - mymean) / myscale
    x[y == i] = truncnorm.rvs(a=a, b=b, loc=mymean, scale=myscale, size=np.sum(y == i))
data = pd.DataFrame({"cont_attribute": x, "class_label": y})

# make a loss function that accounts for the bad orderings
def loss(cuts):
    x1, x2, x3 = cuts
    if 0 >= x1 or x1 >= x2 or x2 >= x3 or x3 >= 100:
        return np.inf
    yhat = pd.cut(
        data["cont_attribute"],
        bins=[0, x1, x2, x3, 100],
        labels=[0, 1, 2, 3],
        # duplicates="drop",
    ).astype("int")
    return -cohen_kappa_score(data["class_label"], yhat)

# Compute the result via brute force
ranges = [(0, 100)] * 3
Ns = 30
result = brute(func=loss, ranges=ranges, Ns=Ns)
print(result)
print(-loss(result))

# Evaluate the final result in a confusion matrix
x1, x2, x3 = result
data["class_pred"] = pd.cut(
    data["cont_attribute"],
    bins=[0, x1, x2, x3, 100],
    labels=[0, 1, 2, 3],
    duplicates="drop",
).astype("int")
mat = confusion_matrix(y_true=data['class_label'], y_pred=data['class_pred'])
plt.matshow(mat)
# Loop over data dimensions and create text annotations.
for i in range(4):
    for j in range(4):
        text = plt.text(j, i, mat[i, j],
                        ha="center", va="center", color="grey")
plt.xlabel('Predicted class')
plt.ylabel('True class')
plt.show()

# Evaluate result graphically
# inspect the data
fig, ax = plt.subplots(2, 1)
sns.histplot(data=data, x="cont_attribute", hue="class_label", ax=ax[0], multiple='stack')
sns.histplot(data=data, x="cont_attribute", hue="class_pred", ax=ax[1], multiple='stack')
plt.show()
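For reference, here is a minimal sketch of the increment parametrization suggested above (my addition; loss_increments is an illustrative name, and it reuses data, pd, np, brute and cohen_kappa_score from the code above). The search runs over (x1, d2, d3) with x2 = x1 + d2 and x3 = x2 + d3, so every grid point is automatically ordered:
def loss_increments(params):
    # params = (x1, d2, d3) with x2 = x1 + d2 and x3 = x2 + d3
    x1, d2, d3 = params
    x2, x3 = x1 + d2, x1 + d2 + d3
    if x1 <= 0 or d2 <= 0 or d3 <= 0 or x3 >= 100:
        return np.inf
    yhat = pd.cut(data["cont_attribute"], bins=[0, x1, x2, x3, 100],
                  labels=[0, 1, 2, 3]).astype("int")
    return -cohen_kappa_score(data["class_label"], yhat)

result_inc = brute(func=loss_increments, ranges=[(1, 98)] * 3, Ns=30)
x1_inc, x2_inc, x3_inc = np.cumsum(result_inc)   # recover the cut points
print(x1_inc, x2_inc, x3_inc, -loss_increments(result_inc))
This does not make a single objective evaluation cheaper, but it stops most of the grid from being wasted on unordered cut points.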
Regarding the use of scipy.optimize.minimize: that is not practical with Cohen's kappa as the objective, since it is not differentiable and therefore not easy to optimize over. Consider using a cross-entropy loss instead, but in that case you would need a (parametric) model for the classification task.
A standard ordinal classifier is available in the ordinal regression module of statsmodels. It will be vastly faster than the brute method, but possibly less accurate when evaluated on Cohen's kappa. That is probably the route I would have taken for a higher number of bins.
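A minimal sketch of that route (my addition, assuming statsmodels >= 0.12; note the thresholds it estimates live on the latent scale, not directly on cont_attribute):
from statsmodels.miscmodels.ordinal_model import OrderedModel

# ordered probit on the continuous attribute; the fitted thresholds
# play the role of x1, x2, x3 on the latent scale
model = OrderedModel(data["class_label"], data[["cont_attribute"]], distr="probit")
fit = model.fit(method="bfgs", disp=False)
print(fit.summary())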

How can I compute the Pearson correlation matrix and retain only significant values?

I have a 4-by-3 matrix, X, and wish to form the 3-by-3 Pearson correlation matrix, C, obtained by computing correlations between all 3 possible column combinations of X. However, entries of C that correspond to correlations that aren't statistically significant should be set to zero.
I know how to get pair-wise correlations and significance values using pearsonr in scipy.stats. For example,
import numpy as np
from scipy.stats.stats import pearsonr
X = np.array([[1, 1, -2], [0, 0, 0], [0, .2, 1], [5, 3, 4]])
pearsonr(X[:, 0], X[:, 1])
returns (0.9915008164289165, 0.00849918357108348), a correlation of about .9915 between columns one and two of X, with p-value .0085.
I could easily get my desired matrix using nested loops:
Pre-populate C as a 3-by-3 matrix of zeros.
Each pass of the nested loop will correspond to two columns of X. The entry of C corresponding to this pair of columns will be set to the pairwise correlation provided the p-value is less than or equal to my threshold, say .01.
I'm wondering if there's a simpler way. I know in Pandas, I can create the correlation matrix, C, in basically one line:
import pandas as pd
df = pd.DataFrame(data=X)
C_frame = df.corr(method='pearson')
C = C_frame.to_numpy()
Is there a way to get the matrix or data frame of p-values, P, without a loop? If so, how could I set each entry of C to zero should the corresponding p-value in P exceed my threshold?
Looking through the docs for pearsonr reveals the formulae used to compute the correlations. It should not be too difficult to get the correlations between the columns of a matrix using vectorization.
While you could compute C using pandas, I will show a pure NumPy implementation of the entire process.
First, compute the r-values:
X = np.array([[1, 1, -2],
              [0, 0, 0],
              [0, .2, 1],
              [5, 3, 4]])
n = X.shape[0]
X -= X.mean(axis=0)
s = (X**2).sum(axis=0)
r = (X[..., None] * X[..., None, :]).sum(axis=0) / np.sqrt(s[:, None] * s[None, :])
Computing the p values is made simple given the existence of the beta distribution in scipy. Taken directly from the docs:
import scipy.stats

dist = scipy.stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2)
p = 2 * dist.cdf(-abs(r))
You can trivially make a mask from p with your threshold, and apply it to r to make C:
mask = (p <= 0.01)
C = np.zeros_like(r)
C[mask] = r[mask]
A better option would probably be to modify your r in-place:
r[p > 0.01] = 0
In function form:
def non_trivial_correlation(X, threshold=0.01):
    n = X.shape[0]
    X = X - X.mean(axis=0)  # Don't modify the original
    s = (X**2).sum(axis=0)
    r = (X[..., None] * X[..., None, :]).sum(axis=0) / np.sqrt(s[:, None] * s[None, :])
    p = 2 * scipy.stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2).cdf(-abs(r))
    r[p > threshold] = 0
    return r
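For example (my addition), applied to the X from the question, with numpy and scipy.stats imported as above:
X = np.array([[1, 1, -2], [0, 0, 0], [0, .2, 1], [5, 3, 4]])
C = non_trivial_correlation(X)
print(C)  # off-diagonal entries are zeroed unless p <= threshold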

2D Gaussian Fit for intensities at certain coordinates in Python

I have a set of coordinates (x, y, z(x, y)) which describe intensities (z) at coordinates x, y. For a set number of these intensities at different coordinates, I need to fit a 2D Gaussian that minimizes the mean squared error.
The data is in numpy matrices and for each fitting session I will have either 4, 9, 16 or 25 coordinates. Ultimately I just need to get the central position of the gaussian (x_0, y_0) that has smallest MSE.
All of the examples that I have found use scipy.optimize.curve_fit but the input data they have is over an entire mesh rather than a few coordinates.
Any help would be appreciated.
Introduction
There are multiple ways to approach this. You can use non-linear methods (e.g. scipy.optimize.curve_fit), but they'll be slow and aren't guaranteed to converge. You can linearize the problem (fast, unique solution), but any noise in the "tails" of the distribution will cause issues. There are actually a few tricks you can apply to this particular case to avoid the latter issue. I'll show some examples, but I don't have time to demonstrate all of the "tricks" right now.
Just as a side note, a general 2D Gaussian has 6 parameters, so you won't be able to fully fit things with 4 points. However, it sounds like you might be assuming that there's no covariance between x and y and that the variances are the same in each direction (i.e. a perfectly "round" bell curve). If that's the case, then you only need four parameters. If you know the amplitude of the Gaussian, you'll only need three. However, I'm going to start with the general solution, and you can simplify it later on, if you want to.
For the moment, let's focus on solving this problem using non-linear methods (e.g. scipy.optimize.curve_fit).
The general equation for a 2D Gaussian (directly from Wikipedia) is

f(x, y) = A * exp(-(a*(x - x0)**2 + 2*b*(x - x0)*(y - y0) + c*(y - y0)**2))

where the matrix [[a, b], [b, c]] is essentially 0.5 times the inverse of the covariance matrix, A is the amplitude, and (x0, y0) is the center.
Generate simplified sample data
Let's write the equation above out:
import numpy as np
import matplotlib.pyplot as plt
def gauss2d(x, y, amp, x0, y0, a, b, c):
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
And then let's generate some example data. To start with, we'll generate some data that will be easy to fit:
np.random.seed(1977) # For consistency
x, y = np.random.random((2, 10))
x0, y0 = 0.3, 0.7
amp, a, b, c = 1, 2, 3, 4
zobs = gauss2d(x, y, amp, x0, y0, a, b, c)
fig, ax = plt.subplots()
scat = ax.scatter(x, y, c=zobs, s=200)
fig.colorbar(scat)
plt.show()
Note that we haven't added any noise, and the center of the distribution is within the range that we have data (i.e. center at 0.3, 0.7 and a scatter of x,y observations between 0 and 1). For the moment, let's stick with this, and then we'll see what happens when we add noise and shift the center.
Non-linear fitting
To start with, let's use scipy.optimize.curve_fit to perform a non-linear least-squares fit to the Gaussian function. (On a side note, you can play around with the exact minimization algorithm by using some of the other functions in scipy.optimize.)
The scipy.optimize functions expect a slightly different function signature than the one we originally wrote above. We could write a wrapper to "translate", but let's just re-write the gauss2d function instead:
def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
All we did was have the function expect the independent variables (x & y) as a single 2xN array.
Now we need to make an initial guess at what the Gaussian curve's parameters actually are. This is optional (the default is all ones, if I recall correctly), but you're likely to have problems converging if 1, 1 is not particularly close to the "true" center of the Gaussian curve. For that reason, we'll use the x and y values of our largest observed z-value as a starting point for the center. I'll leave the rest of the parameters as 1, but if you know that they're likely to consistently be significantly different, change them to something more reasonable.
Here's the full, stand-alone example:
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt

def main():
    x0, y0 = 0.3, 0.7
    amp, a, b, c = 1, 2, 3, 4
    true_params = [amp, x0, y0, a, b, c]
    xy, zobs = generate_example_data(10, true_params)
    x, y = xy

    i = zobs.argmax()
    guess = [1, x[i], y[i], 1, 1, 1]
    pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess)

    zpred = gauss2d(xy, *pred_params)
    print('True parameters: ', true_params)
    print('Predicted params:', pred_params)
    print('Residual, RMS(obs - pred):', np.sqrt(np.mean((zobs - zpred)**2)))

    plot(xy, zobs, pred_params)
    plt.show()

def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)

def generate_example_data(num, params):
    np.random.seed(1977) # For consistency
    xy = np.random.random((2, num))
    zobs = gauss2d(xy, *params)
    return xy, zobs

def plot(xy, zobs, pred_params):
    x, y = xy
    yi, xi = np.mgrid[:1:30j, -.2:1.2:30j]
    xyi = np.vstack([xi.ravel(), yi.ravel()])

    zpred = gauss2d(xyi, *pred_params)
    zpred.shape = xi.shape

    fig, ax = plt.subplots()
    ax.scatter(x, y, c=zobs, s=200, vmin=zpred.min(), vmax=zpred.max())
    im = ax.imshow(zpred, extent=[xi.min(), xi.max(), yi.max(), yi.min()],
                   aspect='auto')
    fig.colorbar(im)
    ax.invert_yaxis()
    return fig

main()
In this case, we exactly(ish) recover our original "true" parameters.
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1. 0.3 0.7 2. 3. 4. ]
Residual, RMS(obs - pred): 1.01560615193e-16
As we'll see in a second, this won't always be the case...
Adding Noise
Let's add some noise to our observations. All I've done here is change the generate_example_data function:
def generate_example_data(num, params):
    np.random.seed(1977) # For consistency
    xy = np.random.random((2, num))
    noise = np.random.normal(0, 0.3, num)
    zobs = gauss2d(xy, *params) + noise
    return xy, zobs
However, the result looks quite different:
And as far as the parameters go:
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1.129 0.263 0.750 1.280 32.333 10.103 ]
Residual, RMS(obs - pred): 0.152444640098
The predicted center hasn't changed much, but the b and c parameters have changed quite a bit.
If we change the center of the function to somewhere slightly outside of our scatter of points:
x0, y0 = -0.3, 1.1
We'll wind up with complete nonsense as a result in the presence of noise! (It still works correctly without noise.)
True parameters: [1, -0.3, 1.1, 2, 3, 4]
Predicted params: [ 0.546 -0.939 0.857 -0.488 44.069 -4.136]
Residual, RMS(obs - pred): 0.235664449826
This is a common problem when fitting a function that decays to zero: any noise in the "tails" can produce a very poor result. There are a number of strategies to deal with it. One of the easiest is to weight the inversion by the observed z-values. Here's an example for the 1D case (focusing on linearizing the problem): How can I perform a least-squares fitting over multiple data sets fast? If I have time later, I'll add an example of this for the 2D case.
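As a rough sketch of that weighting idea for the non-linear fit above (my addition, not part of the original answer): curve_fit accepts a sigma argument of per-point uncertainties, so assigning large sigmas to small z-values down-weights the tails.
# assumes xy, zobs, gauss2d and guess from the example above
weights = np.clip(zobs, 1e-3, None)       # treat larger intensities as more reliable
sigma = 1.0 / weights                     # curve_fit interprets sigma as per-point std dev
pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess,
                                        sigma=sigma, absolute_sigma=False)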

Second order gradient in numpy

I am trying to calculate the 2nd-order gradient numerically of an array in numpy.
a = np.sin(np.arange(0, 10, .01))
da = np.gradient(a)
dda = np.gradient(da)
This is what I come up. Is the the way it should be done?
I am asking this, because in numpy there isn't an option saying np.gradient(a, order=2). I am concerned about whether this usage is wrong, and that is why numpy does not have this implemented.
PS1: I do realize that there is np.diff(a, 2). But this is only single-sided estimation, so I was curious why np.gradient does not have a similar keyword.
PS2: The np.sin() is a toy data - the real data does not have an analytic form.
Thank you!
I'll second @jrennie's first sentence: it can all depend. The numpy.gradient function requires that the data be evenly spaced (although it allows for different distances in each direction if multi-dimensional). If your data does not adhere to this, then numpy.gradient isn't going to be much use. Experimental data may have (OK, will have) noise on it, in addition to not necessarily being evenly spaced. In this case it might be better to use one of the scipy.interpolate spline functions (or objects). These can take unevenly spaced data, allow for smoothing, and can return derivatives up to k-1 where k is the order of the spline fit requested. The default value for k is 3, so a second derivative is just fine.
Example:
import scipy.interpolate

spl = scipy.interpolate.splrep(x, y, k=3)      # no smoothing, 3rd order spline
ddy = scipy.interpolate.splev(x, spl, der=2)   # use those knots to get the second derivative
The object oriented splines like scipy.interpolate.UnivariateSpline have methods for the derivatives. Note that the derivative methods are implemented in Scipy 0.13 and are not present in 0.12.
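For illustration (my addition), a minimal sketch of that object-oriented route, assuming x and y are 1-D arrays of samples:
from scipy.interpolate import UnivariateSpline

spl = UnivariateSpline(x, y, k=3, s=0)   # s=0 -> interpolating spline, no smoothing
ddy = spl.derivative(n=2)(x)             # evaluate the 2nd derivative at the sample points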
Note that, as pointed out by @JosephCottham in comments in 2018, this answer (good for Numpy 1.08 at least) is no longer applicable since (at least) Numpy 1.14. Check your version number and the available options for the call.
There's no universal right answer for numerical gradient calculation. Before you can calculate the gradient from sample data, you have to make some assumption about the underlying function that generated that data. You can technically use np.diff for gradient calculation. Using np.gradient is a reasonable approach. I don't see anything fundamentally wrong with what you are doing; it's one particular approximation of the 2nd derivative of a 1-D function.
The double-gradient approach fails for discontinuities in the first derivative. Since the gradient function takes one data point to the left and one to the right into account, the error spreads when the function is applied multiple times.
On the other hand, the second derivative can be calculated by the formula
d^2 f(x[i]) / dx^2 = (f(x[i-1]) - 2*f(x[i]) + f(x[i+1])) / h^2
(compare here). This has the advantage of taking just the two neighboring pixels into account.
In the picture, the double np.gradient approach (left) and the formula above, as implemented by np.diff (right), are compared. As f(x) has only one kink at zero, the second derivative (green) should have a peak only there.
Because the double-gradient solution takes 2 neighboring points in each direction into account, it produces finite second-derivative values at +/- 1.
In some cases, however, you may prefer the double-gradient solution, since it is more robust to noise.
I am not sure why there are both np.gradient and np.diff, but one reason might be that the second argument of np.gradient defines the pixel distance (for each dimension), and for images it can be applied to both dimensions simultaneously: gy, gx = np.gradient(a).
Code
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(-5,6,1)
f = np.abs(xs)
f_x = np.gradient(f)
f_xx_bad = np.gradient(f_x)
f_xx_good = np.diff(f, 2)
test = f[:-2] - 2* f[1:-1] + f[2:]
# lets plot all this
fig, axs = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
ax = axs[0]
ax.set_title('bad: double gradient')
ax.plot(xs, f, marker='o', label='f(x)')
ax.plot(xs, f_x, marker='o', label='d f(x) / dx')
ax.plot(xs, f_xx_bad, marker='o', label='d^2 f(x) / dx^2')
ax.legend()
ax = axs[1]
ax.set_title('good: diff with n=2')
ax.plot(xs, f, marker='o', label='f(x)')
ax.plot(xs, f_x, marker='o', label='d f(x) / dx')
ax.plot(xs[1:-1], f_xx_good, marker='o', label='d^2 f(x) / dx^2')
ax.plot(xs[1:-1], test, marker='o', label='test', markersize=1)
ax.legend()
As I keep stepping over this problem in one form or another again and again, I decided to write a function gradient_n, which adds a differentiation-order functionality to np.gradient. Not all functionalities of np.gradient are supported, e.g. differentiation over multiple axes.
Like np.gradient, gradient_n returns the differentiated result in the same shape as the input. A pixel-distance argument (d) is also supported.
import numpy as np

def gradient_n(arr, n, d=1, axis=0):
    """Differentiate np.ndarray n times.

    Similar to np.diff, but additional support of pixel distance d
    and padding of the result to the same shape as arr.

    If n is even: np.diff is applied and the result is zero-padded
    If n is odd:
        np.diff is applied n-1 times and zero-padded.
        Then gradient is applied. This ensures the right output shape.
    """
    n2 = int((n // 2) * 2)

    diff = arr
    if n2 > 0:
        a0 = max(0, axis)
        a1 = max(0, arr.ndim - axis - 1)
        diff = np.diff(arr, n2, axis=axis) / d**n2
        diff = np.pad(diff, tuple([(0, 0)]*a0 + [(1, 1)] + [(0, 0)]*a1),
                      'constant', constant_values=0)

    if n > n2:
        assert n - n2 == 1, 'n={:f}, n2={:f}'.format(n, n2)
        diff = np.gradient(diff, d, axis=axis)

    return diff

def test_gradient_n():
    import matplotlib.pyplot as plt

    x = np.linspace(-4, 4, 17)
    y = np.linspace(-2, 2, 9)
    X, Y = np.meshgrid(x, y)
    arr = np.abs(X)
    arr_x = np.gradient(arr, .5, axis=1)
    arr_x2 = gradient_n(arr, 1, .5, axis=1)
    arr_xx = np.diff(arr, 2, axis=1) / .5**2
    arr_xx = np.pad(arr_xx, ((0, 0), (1, 1)), 'constant', constant_values=0)
    arr_xx2 = gradient_n(arr, 2, .5, axis=1)

    assert np.sum(arr_x - arr_x2) == 0
    assert np.sum(arr_xx - arr_xx2) == 0

    fig, axs = plt.subplots(2, 2, figsize=(29, 21))
    axs = np.array(axs).flatten()

    ax = axs[0]
    ax.set_title('x-cut')
    ax.plot(x, arr[0, :], marker='o', label='arr')
    ax.plot(x, arr_x[0, :], marker='o', label='arr_x')
    ax.plot(x, arr_x2[0, :], marker='x', label='arr_x2', ls='--')
    ax.plot(x, arr_xx[0, :], marker='o', label='arr_xx')
    ax.plot(x, arr_xx2[0, :], marker='x', label='arr_xx2', ls='--')
    ax.legend()

    ax = axs[1]
    ax.set_title('arr')
    im = ax.imshow(arr, cmap='bwr')
    cbar = ax.figure.colorbar(im, ax=ax, pad=.05)

    ax = axs[2]
    ax.set_title('arr_x')
    im = ax.imshow(arr_x, cmap='bwr')
    cbar = ax.figure.colorbar(im, ax=ax, pad=.05)

    ax = axs[3]
    ax.set_title('arr_xx')
    im = ax.imshow(arr_xx, cmap='bwr')
    cbar = ax.figure.colorbar(im, ax=ax, pad=.05)

test_gradient_n()
This is an excerpt from the original documentation (at the time of writing found at http://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html). It states that unless the sampling distance is 1 you need to include a list containing the distances as an argument.
numpy.gradient(f, *varargs, **kwargs)

    Return the gradient of an N-dimensional array.

    The gradient is computed using second order accurate central differences in the
    interior and either first differences or second order accurate one-sided (forward
    or backwards) differences at the boundaries. The returned gradient hence has the
    same shape as the input array.

    Parameters:
        f : array_like
            An N-dimensional array containing samples of a scalar function.
        varargs : list of scalar, optional
            N scalars specifying the sample distances for each dimension,
            i.e. dx, dy, dz, ... Default distance: 1.
        edge_order : {1, 2}, optional
            Gradient is calculated using Nth order accurate differences at the
            boundaries. Default: 1.
            New in version 1.9.1.

    Returns:
        gradient : ndarray
            N arrays of the same shape as f giving the derivative of f with
            respect to each dimension.
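For example (my addition), with a non-unit sample spacing the distance is passed as the second argument, and the call can be nested for a second derivative:
import numpy as np

x = np.linspace(0, 10, 101)          # sample spacing dx = 0.1, not 1
y = np.sin(x)
dx = x[1] - x[0]
dy = np.gradient(y, dx)              # first derivative with the true spacing
ddy = np.gradient(dy, dx)            # nested call: second derivative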
My solution is to create a function similar to np.gradient that calculates the 2nd derivatives numerically from the array data.
import numpy as np

def gradient2_even(y, h=None, edge_order=1):
    """
    Return the 2nd-order gradient, i.e.
    2nd derivatives of y with n samples and k components.

    The 2nd-order gradient is computed using second-order-accurate central differences
    in the interior points and either first or second order accurate one-sided
    (forward or backwards) differences at the boundaries.
    The returned gradient hence has the same shape as the input array.

    Parameters
    ----------
    y : 1d or 2d array_like
        The array containing the samples. If 2d with shape (n,k),
        n is the number of samples (at least 2) while k is the number of
        y series/components. 1d input is equivalent to 2d input with shape (n,1).
    h : constant or 1d, optional
        Spacing between the y samples. Default unitary spacing for
        all y components. Spacing can be specified using:

            1. A single scalar spacing value for all y components.
            2. A 1d array_like of length k specifying the spacing for each y component.

    edge_order : {1, 2}, optional
        Order 1 means 3-point forward/backward finite differences
        are used to calculate the 2nd derivatives at the edge points, while
        order 2 uses 4-point forward/backward finite differences.

    Returns
    -------
    d2y : 1d or 2d array
        Array containing the 2nd derivatives. The output shape is the same as y.
    """
    if edge_order != 1 and edge_order != 2:
        raise ValueError('edge_order must be 1 or 2.')

    y = np.asfarray(y)
    origshape = y.shape
    if y.ndim != 1 and y.ndim != 2:
        raise ValueError('y can only be 1d or 2d.')
    elif y.ndim == 1:
        y = np.atleast_2d(y).T
    elif y.ndim == 2:
        if y.shape[0] < 2:
            raise ValueError('The number of y samples must be at least 2.')
    n, k = y.shape

    if h is None:
        h = 1.0
    else:
        h = np.asfarray(h)
        if h.ndim != 0 and h.ndim != 1:
            raise ValueError('h can only be 0d or 1d.')
        elif h.ndim == 1 and h.size != k:  # spacing is per y component, so it must have length k
            raise ValueError('If h is 1d, it must have the same number of elements as the components of y.')

    d2y = np.zeros_like(y)
    if n == 2:
        pass
    elif n == 3:
        d2y[:] = 1/h**2 * (y[0] - 2*y[1] + y[2])
    else:
        d2y[1:-1] = 1/h**2 * (y[:-2] - 2*y[1:-1] + y[2:])
        if edge_order == 1:
            d2y[0] = 1/h**2 * (y[0] - 2*y[1] + y[2])
            d2y[-1] = 1/h**2 * (y[-1] - 2*y[-2] + y[-3])
        else:
            d2y[0] = 1/h**2 * (2*y[0] - 5*y[1] + 4*y[2] - y[3])
            d2y[-1] = 1/h**2 * (2*y[-1] - 5*y[-2] + 4*y[-3] - y[-4])
    return d2y.reshape(origshape)
Using your example,
# After importing the function from the script file or running it
from numpy import *
from matplotlib.pyplot import *
x, h = linspace(0, 10, 17, retstep=True) # use a fairly coarse grid to see the discrepancies better
y = sin(x)
ypp = -sin(x) # analytical 2nd derivatives
# Compute numerically the 2nd derivatives using 2nd-order finite differences at the edge points
d2y = gradient2_even(y, h, 2)
# Compute numerically the 2nd derivatives using nested gradient function
d2y2 = gradient(gradient(y, h, edge_order=2), h, edge_order=2)
# Compute numerically the 2nd derivatives using 1st-order finite differences at the edge points
d2y3 = gradient2_even(y, h, 1)
fig,ax=subplots(1,1)
ax.plot(x, ypp, x, d2y, 'o', x, d2y2, 'o', x, d2y3, 'o'), ax.grid()
ax.legend(['Analytical', 'edge_order=2', 'nested gradient', 'edge_order=1'])
fig.tight_layout()
