I am trying to numerically calculate the 2nd-order gradient of an array in numpy.
a = np.sin(np.arange(0, 10, .01))
da = np.gradient(a)
dda = np.gradient(da)
This is what I came up with. Is this the way it should be done?
I am asking this because in numpy there isn't an option like np.gradient(a, order=2). I am concerned about whether this usage is wrong, and that that is why numpy does not have this implemented.
PS1: I do realize that there is np.diff(a, 2). But this is only a single-sided estimate, so I was curious why np.gradient does not have a similar keyword.
PS2: The np.sin() is toy data; the real data does not have an analytic form.
Thank you!
I'll second @jrennie's first sentence - it can all depend. The numpy.gradient function requires that the data be evenly spaced (although it allows for different distances in each direction if multi-dimensional). If your data does not adhere to this, then numpy.gradient isn't going to be much use. Experimental data may have (OK, will have) noise on it, in addition to not necessarily being evenly spaced. In this case it might be better to use one of the scipy.interpolate spline functions (or objects). These can take unevenly spaced data, allow for smoothing, and can return derivatives up to k-1 where k is the order of the spline fit requested. The default value for k is 3, so a second derivative is just fine.
Example:
spl = scipy.interpolate.splrep(x,y,k=3) # no smoothing, 3rd order spline
ddy = scipy.interpolate.splev(x,spl,der=2) # use those knots to get second derivative
The object-oriented spline classes like scipy.interpolate.UnivariateSpline have methods for the derivatives. Note that the derivative methods were implemented in SciPy 0.13 and are not present in 0.12.
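For instance, a minimal sketch of the object-oriented route (the noisy sine here is just stand-in data):

import numpy as np
from scipy.interpolate import UnivariateSpline
x = np.arange(0, 10, .01)
y = np.sin(x) + np.random.normal(0, .1, x.size)  # noisy toy data
spl = UnivariateSpline(x, y, k=3)  # s defaults to a data-dependent smoothing level
d2y = spl.derivative(n=2)(x)       # spline of the 2nd derivative, evaluated at x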
Note that, as pointed out by @JosephCottham in comments in 2018, this answer (good for NumPy 1.8 at least) is no longer applicable since (at least) NumPy 1.14. Check your version number and the available options for the call.
There's no universal right answer for numerical gradient calculation. Before you can calculate the gradient of sample data, you have to make some assumption about the underlying function that generated that data. You can technically use np.diff for gradient calculation. Using np.gradient is a reasonable approach. I don't see anything fundamentally wrong with what you are doing: it's one particular approximation of the 2nd derivative of a 1-D function.
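As a quick sanity check on the question's toy data (a sketch; pass the sample spacing so the units come out right), the nested gradient tracks the analytic second derivative of sin:

import numpy as np
x = np.arange(0, 10, .01)
a = np.sin(x)
dda = np.gradient(np.gradient(a, .01), .01)
print(np.abs(dda[2:-2] + np.sin(x)[2:-2]).max())  # small interior error vs. the exact -sin(x)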
The double gradient approach fails for discontinuities in the first derivative.
As the gradient function takes one data point to the left and one to the right into account, the error continues/spreads when it is applied multiple times.
On the other hand, the second derivative can be calculated by the formula
d^2 f(x[i]) / dx^2 = (f(x[i-1]) - 2*f(x[i]) + f(x[i+1])) / h^2
compare here. This has the advantage of taking just the two neighboring pixels into account.
In the picture below, the double np.gradient approach (left) is compared with the above-mentioned formula (right), as implemented by np.diff. As f(x) has only one kink, at zero, the second derivative (green) should have a peak only there.
As the double gradient solution takes 2 neighboring points in each direction into account, it produces finite second-derivative values at +/- 1.
In some cases, however, you may prefer the double gradient solution, as it is more robust to noise.
I am not sure why there are both np.gradient and np.diff, but one reason might be that the second argument of np.gradient defines the pixel distance (for each dimension), and for images it can be applied to both dimensions simultaneously: gy, gx = np.gradient(a).
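A small sketch of that 2D usage (the gradient along axis 0 is returned first, and one spacing per axis can be passed):

import numpy as np
img = np.arange(12, dtype=float).reshape(3, 4)
gy, gx = np.gradient(img, 0.5, 0.25)  # row spacing 0.5, column spacing 0.25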
Code
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(-5,6,1)
f = np.abs(xs)
f_x = np.gradient(f)
f_xx_bad = np.gradient(f_x)
f_xx_good = np.diff(f, 2)
test = f[:-2] - 2* f[1:-1] + f[2:]
# let's plot all this
fig, axs = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
ax = axs[0]
ax.set_title('bad: double gradient')
ax.plot(xs, f, marker='o', label='f(x)')
ax.plot(xs, f_x, marker='o', label='d f(x) / dx')
ax.plot(xs, f_xx_bad, marker='o', label='d^2 f(x) / dx^2')
ax.legend()
ax = axs[1]
ax.set_title('good: diff with n=2')
ax.plot(xs, f, marker='o', label='f(x)')
ax.plot(xs, f_x, marker='o', label='d f(x) / dx')
ax.plot(xs[1:-1], f_xx_good, marker='o', label='d^2 f(x) / dx^2')
ax.plot(xs[1:-1], test, marker='o', label='test', markersize=1)
ax.legend()
As I keep stepping over this problem in one form or another again and again, I decided to write a function gradient_n, which adds a differentiation-order option to np.gradient. Not all features of np.gradient are supported, e.g. differentiation along multiple axes.
Like np.gradient, gradient_n returns the differentiated result in the same shape as the input. A pixel distance argument (d) is also supported.
import numpy as np

def gradient_n(arr, n, d=1, axis=0):
    """Differentiate np.ndarray n times.

    Similar to np.diff, but additional support of pixel distance d
    and padding of the result to the same shape as arr.

    If n is even: np.diff is applied and the result is zero-padded.
    If n is odd:
        np.diff is applied n-1 times and zero-padded.
        Then gradient is applied. This ensures the right output shape.
    """
    n2 = int((n // 2) * 2)
    diff = arr
    if n2 > 0:
        a0 = max(0, axis)
        a1 = max(0, arr.ndim - axis - 1)
        diff = np.diff(arr, n2, axis=axis) / d**n2
        diff = np.pad(diff, tuple([(0, 0)]*a0 + [(1, 1)] + [(0, 0)]*a1),
                      'constant', constant_values=0)
    if n > n2:
        assert n - n2 == 1, 'n={:f}, n2={:f}'.format(n, n2)
        diff = np.gradient(diff, d, axis=axis)
    return diff
def test_gradient_n():
    import matplotlib.pyplot as plt
    x = np.linspace(-4, 4, 17)
    y = np.linspace(-2, 2, 9)
    X, Y = np.meshgrid(x, y)
    arr = np.abs(X)
    arr_x = np.gradient(arr, .5, axis=1)
    arr_x2 = gradient_n(arr, 1, .5, axis=1)
    arr_xx = np.diff(arr, 2, axis=1) / .5**2
    arr_xx = np.pad(arr_xx, ((0, 0), (1, 1)), 'constant', constant_values=0)
    arr_xx2 = gradient_n(arr, 2, .5, axis=1)
    assert np.sum(arr_x - arr_x2) == 0
    assert np.sum(arr_xx - arr_xx2) == 0
    fig, axs = plt.subplots(2, 2, figsize=(29, 21))
    axs = np.array(axs).flatten()
    ax = axs[0]
    ax.set_title('x-cut')
    ax.plot(x, arr[0, :], marker='o', label='arr')
    ax.plot(x, arr_x[0, :], marker='o', label='arr_x')
    ax.plot(x, arr_x2[0, :], marker='x', label='arr_x2', ls='--')
    ax.plot(x, arr_xx[0, :], marker='o', label='arr_xx')
    ax.plot(x, arr_xx2[0, :], marker='x', label='arr_xx2', ls='--')
    ax.legend()
    ax = axs[1]
    ax.set_title('arr')
    im = ax.imshow(arr, cmap='bwr')
    cbar = ax.figure.colorbar(im, ax=ax, pad=.05)
    ax = axs[2]
    ax.set_title('arr_x')
    im = ax.imshow(arr_x, cmap='bwr')
    cbar = ax.figure.colorbar(im, ax=ax, pad=.05)
    ax = axs[3]
    ax.set_title('arr_xx')
    im = ax.imshow(arr_xx, cmap='bwr')
    cbar = ax.figure.colorbar(im, ax=ax, pad=.05)

test_gradient_n()
This is an excerpt from the original documentation (at the time of writing found at http://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html). It states that unless the sampling distance is 1, you need to include the distances as arguments.
numpy.gradient(f, *varargs, **kwargs)
Return the gradient of an N-dimensional array.
The gradient is computed using second order accurate central differences in the interior and either first differences or second order accurate one-sided (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array.
Parameters:
f : array_like
An N-dimensional array containing samples of a scalar function.
varargs : list of scalar, optional
N scalars specifying the sample distances for each dimension, i.e. dx, dy, dz, ... Default distance: 1.
edge_order : {1, 2}, optional
Gradient is calculated using Nth order accurate differences at the boundaries. Default: 1.
New in version 1.9.1.
Returns:
gradient : ndarray
N arrays of the same shape as f giving the derivative of f with respect to each dimension.
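Applied to the question's example, a sketch of passing the sample distance (0.01 there) so both derivative passes are scaled correctly:

import numpy as np
a = np.sin(np.arange(0, 10, .01))
da = np.gradient(a, .01)    # first derivative with the correct spacing
dda = np.gradient(da, .01)  # second derivative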
My solution is to create a function similar to np.gradient that calculates the 2nd derivatives numerically from the array data.
import numpy as np

def gradient2_even(y, h=None, edge_order=1):
    """
    Return the 2nd-order gradient, i.e. the
    2nd derivatives of y with n samples and k components.

    The 2nd-order gradient is computed using second-order-accurate central differences
    in the interior points and either first or second order accurate one-sided
    (forward or backwards) differences at the boundaries.
    The returned gradient hence has the same shape as the input array.

    Parameters
    ----------
    y : 1d or 2d array_like
        The array containing the samples. If 2d with shape (n,k),
        n is the number of samples (at least 2) while k is the number of
        y series/components. 1d input is equivalent to 2d input with shape (n,1).
    h : constant or 1d, optional
        Spacing between the y samples. Default is unitary spacing for
        all y components. Spacing can be specified using:

        1. A single scalar spacing value for all y components.
        2. A 1d array_like of length k specifying the spacing for each y component.
    edge_order : {1, 2}, optional
        Order 1 means 3-point forward/backward finite differences
        are used to calculate the 2nd derivatives at the edge points, while
        order 2 uses 4-point forward/backward finite differences.

    Returns
    -------
    d2y : 1d or 2d array
        Array containing the 2nd derivatives. The output shape is the same as y.
    """
    if edge_order != 1 and edge_order != 2:
        raise ValueError('edge_order must be 1 or 2.')
    y = np.asarray(y, dtype=float)
    origshape = y.shape
    if y.ndim != 1 and y.ndim != 2:
        raise ValueError('y can only be 1d or 2d.')
    elif y.ndim == 1:
        y = np.atleast_2d(y).T
    elif y.ndim == 2 and y.shape[0] < 2:
        raise ValueError('The number of y samples must be at least 2.')
    n, k = y.shape

    if h is None:
        h = 1.0
    else:
        h = np.asarray(h, dtype=float)
        if h.ndim != 0 and h.ndim != 1:
            raise ValueError('h can only be 0d or 1d.')
        elif h.ndim == 1 and h.size != k:
            raise ValueError('If h is 1d, it must have one spacing per y component.')

    d2y = np.zeros_like(y)
    if n == 2:
        pass
    elif n == 3:
        d2y[:] = 1/h**2 * (y[0] - 2*y[1] + y[2])
    else:
        # interior points: second-order-accurate central differences
        d2y[1:-1] = 1/h**2 * (y[:-2] - 2*y[1:-1] + y[2:])
        if edge_order == 1:
            # 3-point one-sided differences at the edges
            d2y[0] = 1/h**2 * (y[0] - 2*y[1] + y[2])
            d2y[-1] = 1/h**2 * (y[-1] - 2*y[-2] + y[-3])
        else:
            # 4-point one-sided differences at the edges
            d2y[0] = 1/h**2 * (2*y[0] - 5*y[1] + 4*y[2] - y[3])
            d2y[-1] = 1/h**2 * (2*y[-1] - 5*y[-2] + 4*y[-3] - y[-4])
    return d2y.reshape(origshape)
Using your example,
# After importing the function from the script file or running it
from numpy import *
from matplotlib.pyplot import *
x, h = linspace(0, 10, 17, retstep=True) # use a fairly coarse grid to see the discrepancies better
y = sin(x)
ypp = -sin(x) # analytical 2nd derivatives
# Compute numerically the 2nd derivatives using 2nd-order finite differences at the edge points
d2y = gradient2_even(y, h, 2)
# Compute numerically the 2nd derivatives using nested gradient function
d2y2 = gradient(gradient(y, h, edge_order=2), h, edge_order=2)
# Compute numerically the 2nd derivatives using 1st-order finite differences at the edge points
d2y3 = gradient2_even(y, h, 1)
fig,ax=subplots(1,1)
ax.plot(x, ypp, x, d2y, 'o', x, d2y2, 'o', x, d2y3, 'o'), ax.grid()
ax.legend(['Analytical', 'edge_order=2', 'nested gradient', 'edge_order=1'])
fig.tight_layout()
Related
Goal
I would like to compute the 3D volume integral of a numeric scalar field.
Code
For this post, I will use an example of which the integral can be exactly computed. I have therefore chosen the following function:
In Python, I define the function, and a set of points in 3D, and then generate the discrete values at these points:
import numpy as np
# Make data.
def function(x, y, z):
return x**y**z
N = 5
grid = np.meshgrid(
np.linspace(0, 1, N),
np.linspace(0, 1, N),
np.linspace(0, 1, N)
)
points = np.vstack(list(map(np.ravel, grid))).T
x = points[:, 0]
y = points[:, 1]
z = points[:, 2]
values = [function(points[i, 0], points[i, 1], points[i, 2])
          for i in range(len(points))]
Question
How can I find the integral if I don't know the underlying function, i.e. if I only have the coordinates (x, y, z) and the values?
A nice way to go about this would be using scipy's tplquad integration. However, to use that, we need a function, not a point cloud.
An easy way around that is to use an interpolator to get a function approximating our point cloud. We can for example use scipy's RegularGridInterpolator if the data is on a regular grid:
import numpy as np
from scipy import integrate
from scipy.interpolate import RegularGridInterpolator
# Make data.
def function(x, y, z):
    return x*y*z
N = 5
xmin, xmax = 0, 1
ymin, ymax = 0, 1
zmin, zmax = 0, 1
x = np.linspace(xmin, xmax, N)
y = np.linspace(ymin, ymax, N)
z = np.linspace(zmin, zmax, N)
values = function(*np.meshgrid(x,y,z, indexing='ij'))
# Interpolate:
function_interpolated = RegularGridInterpolator((x, y, z), values)
# tplquad integrates func(z, y, x)
f = lambda z, y, x: function_interpolated([x, y, z])
result, error = integrate.tplquad(f, xmin, xmax, lambda _: ymin, lambda _: ymax,
                                  lambda *_: zmin, lambda *_: zmax)
In the example above, we get result = 0.12499999999999999 - close enough!
The easiest way to achieve what you are looking for is probably scipy's nquad integration function. Here is your example:
from scipy import integrate
# Make data.
def func(x, y, z):
    return x**y**z
ranges = [[0,1], [0,1], [0,1]]
result, error = integrate.nquad(func, ranges)
Are you aware that the function you created is different from the one you show in the image? The one you created is a power tower (x^y^z), while the one you are showing is just a product. If you want to represent the function in the image, use
def func(x, y, z):
    return x*y*z
Hope this answers your question, otherwise just write a comment!
Edit:
Misread your post. If you only have the results, and they are not regularly spaced, you would have to figure out some form of interpolation (e.g. linear) and a lookup table. If you do not know how to create that, let me know. The rest of the stated answer could still be used if you define func to return interpolated values from your original data, as sketched below.
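For instance, here is a sketch of that idea; the names and the random stand-in data are illustrative only, and nquad over an interpolator is slow but workable for small problems:

import numpy as np
from scipy import integrate
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(200, 3))            # stand-in for your scattered (x, y, z)
values = points[:, 0] * points[:, 1] * points[:, 2]  # stand-in for your measured values

interp = LinearNDInterpolator(points, values, fill_value=0.0)

def func(x, y, z):
    return interp(x, y, z).item()  # nquad expects a scalar return value

ranges = [[0, 1], [0, 1], [0, 1]]
result, error = integrate.nquad(func, ranges)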
The first answer nicely explains the principal approach to handle this. I just wanted to illustrate an alternative way, showing the power of the sklearn package and machine-learning regression.
Doing the meshgrid in 3D gives a very large numpy array,
import numpy as np
N = 5
xmin, xmax = 0, 1
ymin, ymax = 0, 1
zmin, zmax = 0, 1
x = np.linspace(xmin, xmax, N)
y = np.linspace(ymin, ymax, N)
z = np.linspace(zmin, zmax, N)
grid = np.array(np.meshgrid(x,y,z, indexing='ij'))
grid.shape  # (3, 5, 5, 5): 3*5*5*5 = 375 numbers
which is visually not very intuitive with 375 numbers (and with different possible indexings, 'ij' or 'xy'). Using regression, we can get the same result with few input points (15-20).
# building random combinations from (x,y,z)
X = np.random.choice(x, 20)[:,None]
Y = np.random.choice(y, 20)[:,None]
Z = np.random.choice(z, 20)[:,None]
xyz = np.concatenate((X,Y,Z), axis = 1)
data = np.multiply.reduce(xyz, axis = 1)
So the input (grid) is just a 2D numpy array:
xyz.shape  # (20, 3)
with the corresponding data:
data.shape  # (20,)
Now the regression function and integration,
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from scipy import integrate
pipe = Pipeline([('polynomial', PolynomialFeatures(degree=3)), ('model', LinearRegression())])
pipe.fit(xyz, data)

def func(x, y, z):
    return pipe.predict([[x, y, z]])
ranges = [[0,1], [0,1], [0,1]]
result, error = integrate.nquad(func, ranges)
print(result)
0.1257
This approach is useful when only a limited number of points is available.
Based on your requirements, it sounds like the most appropriate technique would be Monte Carlo integration:
import numpy as np
import scipy.spatial

# Step 0 - start with some empirical data
observed_points = np.random.uniform(0, 1, size=(10000, 3))
unknown_fn = lambda x: np.prod(x)  # just used to generate fake values
observed_values = np.apply_along_axis(unknown_fn, 1, observed_points)
K = 1000000
# Step 1 - assume that f(x,y,z) can be approximated by an interpolation
# of the data we have (you could get really fancy with the
# selection of interpolation method - we'll stick with straight lines here)
from scipy.interpolate import LinearNDInterpolator
f_interpolate = LinearNDInterpolator(observed_points, observed_values)
# Step 2 randomly sample from within convex hull of observed data
# Step 2a - Uniformly sample from bounding 3D-box of data
lower_bounds = observed_points.min(axis=0)
upper_bounds = observed_points.max(axis=0)
sampled_points = np.random.uniform(lower_bounds, upper_bounds,size=(K, 3))
# Step 2b - Reject points outside of convex hull...
# Luckily, we get a np.nan from LinearNDInterpolator in this case
sampled_values = f_interpolate(sampled_points)
rejected_idxs = np.argwhere(np.isnan(sampled_values))
# Step 2c - Remember accepted values of estimated f(x_i, y_i, z_i)
final_sampled_values = np.delete(sampled_values, rejected_idxs, axis=0)
# Step 3 - Calculate estimate of volume of observed data domain
# Since we sampled uniformly from the convex hull of data domain,
# each point was selected with P(x,y,z)= 1 / Volume of convex hull
volume = scipy.spatial.ConvexHull(observed_points).volume
# Step 4 - Multiply estimated volume of domain by average sampled value
I_hat = volume * final_sampled_values.mean()
print(I_hat)
For a derivation of why this works see this: https://cs.dartmouth.edu/wjarosz/publications/dissertation/appendixA.pdf
I need to create a function which would be the inverse of the np.gradient function, where the Vx, Vy arrays (velocity component vectors) are the input and the output would be an array of anti-derivatives (arrival time) at the data points x, y.
I have data on a (x,y) grid with scalar values (time) at each point.
I have used the numpy gradient function and linear interpolation to determine the gradient vector Velocity (Vx,Vy) at each point (See below).
I have achieved this by:
#LinearTriInterpolator applied to a delaunay triangular mesh
LTI= LinearTriInterpolator(masked_triang, time_array)
#Gradient requested at the mesh nodes:
(Vx, Vy) = LTI.gradient(triang.x, triang.y)
The first image below shows the velocity vectors at each point, and the point labels represent the time value which formed the derivatives (Vx,Vy)
The next image shows the resultant scalar value of the derivatives (Vx,Vy) plotted as a colored contour graph with associated node labels.
So my challenge is:
I need to reverse the process!
Using the gradient vectors (Vx,Vy) or the resultant scalar value to determine the original Time-Value at that point.
Is this possible?
Knowing that the numpy.gradient function is computed using second-order accurate central differences in the interior points and either first or second order accurate one-sided (forward or backwards) differences at the boundaries, I am sure there is a function which would reverse this process.
I was thinking that taking a line integral between the original point (t=0 at x1,y1) and any point (xi,yi) over the Vx,Vy plane would give me the sum of the velocity components. I could then divide this value by the distance between the two points to get the time taken.
Would this approach work? And if so, which numpy integration function would be best applied?
An example of my data can be found here [http://www.filedropper.com/calculatearrivaltimefromgradientvalues060820]
Your help would be greatly appreciated
EDIT:
Maybe this simplified drawing might help understand where I'm trying to get to..
EDIT:
Thanks to @Aguy who has contributed to this code. I have tried to get a more accurate representation using a meshgrid of spacing 0.5 x 0.5 m and calculating the gradient at each mesh point, but I am not able to integrate it properly. I also have some edge effects which are affecting the results, and I don't know how to correct them.
import numpy as np
from scipy import interpolate
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x,y), Arrival_Time, (xgrid, ygrid), method='linear') #Interpolating the Time values
# Format data
X = np.ravel(xgrid)
Y = np.ravel(ygrid)
zs = np.ravel(grid_z1)
Z = zs.reshape(X.shape)
#Calculate Gradient
(dx,dy) = np.gradient(grid_z1) #Find gradient for points on meshgrid
Velocity_dx = dx/stepx  #velocity ms/m
Velocity_dy = dy/stepy  #velocity ms/m
Resultant = (Velocity_dx**2 + Velocity_dy**2)**0.5 #Resultant scalar value ms/m
Resultant = np.ravel(Resultant)
#Plot Original Data F(X,Y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x,y,Arrival_Time,color='r')
ax.plot_trisurf(X, Y, Z)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Time (ms)')
pyplot.show()
#Plot the Derivative of f'(X,Y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(X,Y,Resultant,color='r',s=0.2)
ax.plot_trisurf(X, Y, Resultant)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Velocity (ms/m)')
pyplot.show()
#Integrate to compare the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1)*stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0)*stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                                       dyintegral[i, len(yy) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
Now np.gradient is applied at every mesh node: (dx, dy) = np.gradient(grid_z1)
Now in my process I would analyse the gradient values above and make some adjustments (there are some unusual edge effects being created which I need to rectify), and would then integrate the values to get back to a surface which would be very similar to f(x,y) shown above.
I need some help adjusting the integration function:
#Integrate to compare the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1)*stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0)*stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                                       dyintegral[i, len(yy) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
And now I need to calculate the new 'Time' values at the original (x,y) point locations.
UPDATE (08-09-20): I am getting some promising results using the help from @Aguy. The results can be seen below (with the blue contours representing the original data and the red contours representing the integrated values).
I am still working on an integration approach which can remove the inaccuracies at the areas of min(y) and max(y).
from matplotlib.tri import (Triangulation, UniformTriRefiner,
                            CubicTriInterpolator, LinearTriInterpolator, TriInterpolator, TriAnalyzer)
import pandas as pd
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
#-------------------------------------------------------------------------
# STEP 1: Import data from Excel file, and set variables
#-------------------------------------------------------------------------
df_initial = pd.read_excel(
    r'C:\Users\morga\PycharmProjects\venv\Development\Trial'
    r'.xlsx')
Input data can be found here: link
df_initial = df_initial.sort_values(by='Delay', ascending=True)  # Update dataframe and sort by Delay
x = df_initial['X'].to_numpy()
y = df_initial['Y'].to_numpy()
Arrival_Time = df_initial['Delay'].to_numpy()
# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear') # Interpolating the Time values
# Calculate Gradient (velocity ms/m)
(dy, dx) = np.gradient(grid_z1) # Find gradient for points on meshgrid
Velocity_dx = dx / stepx # x velocity component ms/m
Velocity_dy = dy / stepy  # y velocity component ms/m
# Integrate to compare the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape) # Makes an array filled with 0's the same shape as dx integral
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum(
            [dxintegral[0, len(xx) // 2], dyintegral[i, len(xx) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral[np.isnan(dx)] = np.nan
min_value = np.nanmin(valintegral)
valintegral = valintegral + (min_value * -1)
##Plot Results
fig = plt.figure()
ax = fig.add_subplot()
ax.scatter(x, y, color='black', s=7, zorder=3)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.contour(xgrid, ygrid, valintegral, levels=50, colors='red', zorder=2)
ax.contour(xgrid, ygrid, grid_z1, levels=50, colors='blue', zorder=1)
ax.set_aspect('equal')
plt.show()
TL;DR:
You have multiple challenges to address in this issue, mainly:
Potential reconstruction (scalar field) from its gradient (vector field)
But also:
Observations in a concave hull with a non-rectangular grid;
Numerical 2D line integration and numerical inaccuracy.
It seems it can be solved by choosing an ad hoc interpolant and a smart way to integrate (as pointed out by @Aguy).
MCVE
First, let's build an MCVE to highlight the above-mentioned key points.
Dataset
We recreate a scalar field and its gradient.
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
def f(x, y):
    return x**2 + x*y + 2*y + 1
Nx, Ny = 21, 17
xl = np.linspace(-3, 3, Nx)
yl = np.linspace(-2, 2, Ny)
X, Y = np.meshgrid(xl, yl)
Z = f(X, Y)
zl = np.arange(np.floor(Z.min()), np.ceil(Z.max())+1, 2)
dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)
V = np.hypot(dZdx, dZdy)
The scalar field looks like:
axe = plt.axes(projection='3d')
axe.plot_surface(X, Y, Z, cmap='jet', alpha=0.5)
axe.view_init(elev=25, azim=-45)
And, the vector field looks like:
axe = plt.contour(X, Y, Z, zl, cmap='jet')
axe.axes.quiver(X, Y, dZdx, dZdy, V, units='x', pivot='tip', cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Indeed, the gradient is normal to the potential levels. We also plot the gradient magnitude:
axe = plt.contour(X, Y, V, 10, cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Raw field reconstruction
If we naively reconstruct the scalar field from the gradient:
SdZx = np.cumsum(dZdx, axis=1)*np.diff(xl)[0]
SdZy = np.cumsum(dZdy, axis=0)*np.diff(yl)[0]
Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        Zhat[i,j] += np.sum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
Zhat += Z[0,0] - Zhat[0,0]
We can see the global result is roughly correct, but levels are less accurate where the gradient magnitude is low:
Interpolated field reconstruction
If we increase the grid resolution and pick a specific interpolant (as is usual when dealing with mesh grids), we can get a finer field reconstruction:
r = np.stack([X.ravel(), Y.ravel()]).T
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel())
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel())
Nx, Ny = 200, 200
xli = np.linspace(xl.min(), xl.max(), Nx)
yli = np.linspace(yl.min(), yl.max(), Ny)
Xi, Yi = np.meshgrid(xli, yli)
ri = np.stack([Xi.ravel(), Yi.ravel()]).T
dZdxi = Sx(ri).reshape(Xi.shape)
dZdyi = Sy(ri).reshape(Xi.shape)
SdZxi = np.cumsum(dZdxi, axis=1)*np.diff(xli)[0]
SdZyi = np.cumsum(dZdyi, axis=0)*np.diff(yli)[0]
Zhati = np.zeros(SdZxi.shape)
for i in range(Zhati.shape[0]):
    for j in range(Zhati.shape[1]):
        Zhati[i,j] += np.sum([SdZyi[i,0], -SdZyi[0,0], SdZxi[i,j], -SdZxi[i,0]])
Zhati += Z[0,0] - Zhati[0,0]
Which definitely performs way better:
So basically, increasing the grid resolution with an ad hoc interpolant may help you to get a more accurate result. The interpolant also solves the need for a regular rectangular grid from a triangular mesh to perform the integration.
Concave and convex hull
You have also pointed out inaccuracy on the edges. That is the result of the combination of the interpolant choice and the integration methodology. The integration methodology fails to properly compute the scalar field when it reaches a concave region with few interpolated points. The problem disappears when choosing a mesh-free interpolant able to extrapolate.
To illustrate it, let's remove some data from our MCVE:
q = np.full(dZdx.shape, False)
q[0:6,5:11] = True
q[-6:,-6:] = True
dZdx[q] = np.nan
dZdy[q] = np.nan
Then the interpolant can be constructed as follows:
q2 = ~np.isnan(dZdx.ravel())
r = np.stack([X.ravel(), Y.ravel()]).T[q2,:]
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel()[q2])
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel()[q2])
Performing the integration, we see that in addition to the classical edge effect, we have less accurate values in concave regions (wavy dot-dash lines where the hull is concave), and we have no data outside the convex hull, as Clough-Tocher is a mesh-based interpolant:
Vl = np.arange(0, 11, 1)
axe = plt.contour(X, Y, np.hypot(dZdx, dZdy), Vl, cmap='jet')
axe.axes.contour(Xi, Yi, np.hypot(dZdxi, dZdyi), Vl, cmap='jet', linestyles='-.')
axe.axes.set_aspect('equal')
axe.axes.grid()
So basically, the errors we see in the corners are most likely due to an integration issue combined with an interpolation limited to the convex hull.
To overcome this, we can choose a different interpolant, such as an RBF (radial basis function) kernel, which is able to create data outside the convex hull:
Sx = interpolate.Rbf(r[:,0], r[:,1], dZdx.ravel()[q2], function='thin_plate')
Sy = interpolate.Rbf(r[:,0], r[:,1], dZdy.ravel()[q2], function='thin_plate')
dZdxi = Sx(ri[:,0], ri[:,1]).reshape(Xi.shape)
dZdyi = Sy(ri[:,0], ri[:,1]).reshape(Xi.shape)
Notice the slightly different interface of this interpolator (mind how the parameters are passed).
The result is the following:
We can see that the region outside the convex hull can be extrapolated (RBFs are mesh-free). So choosing an ad hoc interpolant is definitely a key point in solving your problem. But we still need to be aware that extrapolation may perform well but is somewhat meaningless and dangerous.
Solving your problem
The answer provided by @Aguy is perfectly fine, as it sets up a clever way to integrate that is not disturbed by missing points outside the convex hull. But as you mentioned, there is inaccuracy in concave regions inside the convex hull.
If you wish to remove the edge effect you detected, you will have to resort to an interpolant able to extrapolate as well, or find another way to integrate.
Interpolant change
Using RBF interpolant seems to solve your problem. Here is the complete code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import interpolate

df = pd.read_excel('./Trial-Wireup 2.xlsx')
x = df['X'].to_numpy()
y = df['Y'].to_numpy()
z = df['Delay'].to_numpy()
r = np.stack([x, y]).T
#S = interpolate.CloughTocher2DInterpolator(r, z)
#S = interpolate.LinearNDInterpolator(r, z)
S = interpolate.Rbf(x, y, z, epsilon=0.1, function='thin_plate')
N = 200
xl = np.linspace(x.min(), x.max(), N)
yl = np.linspace(y.min(), y.max(), N)
X, Y = np.meshgrid(xl, yl)
#Zp = S(np.stack([X.ravel(), Y.ravel()]).T)
Zp = S(X.ravel(), Y.ravel())
Z = Zp.reshape(X.shape)
dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)
SdZx = np.nancumsum(dZdx, axis=1)*np.diff(xl)[0]
SdZy = np.nancumsum(dZdy, axis=0)*np.diff(yl)[0]
Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        #Zhat[i,j] += np.nansum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
        Zhat[i,j] += np.nansum([SdZx[0,N//2], SdZy[i,N//2], SdZx[i,j], -SdZx[i,N//2]])
Zhat += Z[100,100] - Zhat[100,100]
lz = np.linspace(0, 5000, 20)
axe = plt.contour(X, Y, Z, lz, cmap='jet')
axe = plt.contour(X, Y, Zhat, lz, cmap='jet', linestyles=':')
axe.axes.plot(x, y, '.', markersize=1)
axe.axes.set_aspect('equal')
axe.axes.grid()
Which graphically renders as follows:
The edge effect is gone because the RBF interpolant can extrapolate over the whole grid. You can confirm this by comparing the results of the mesh-based interpolants.
Linear
Clough Tocher
Integration variable order change
We can also try to find a better way to integrate and mitigate the edge effect, e.g. by changing the integration variable order:
Zhat[i,j] += np.nansum([SdZy[N//2,0], SdZx[N//2,j], SdZy[i,j], -SdZy[N//2,j]])
With a classic linear interpolant, the result is quite correct, but we still have an edge effect at the bottom left corner:
As you noticed, the problem occurs in the middle of an axis, in the region where the integration starts and lacks a reference point.
Here is one approach:
First, in order to be able to do integration, it's good to be on a regular grid. Using here variable names x and y as short for your triang.x and triang.y we can first create a grid:
import numpy as np
n = 200 # Grid density
stepx = (max(x) - min(x)) / n
stepy = (max(y) - min(y)) / n
xspace = np.arange(min(x), max(x), stepx)
yspace = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xspace, yspace)
Then we can interpolate dx and dy on the grid using the same LinearTriInterpolator function:
fdx = LinearTriInterpolator(masked_triang, dx)
fdy = LinearTriInterpolator(masked_triang, dy)
dxgrid = fdx(xgrid, ygrid)
dygrid = fdy(xgrid, ygrid)
Now comes the integration part. In principle, any path we choose should get us to the same value. In practice, since there are missing values and different densities, the choice of path is very important to get a reasonably accurate answer.
Below I choose to integrate over dxgrid in the x direction from 0 to the middle of the grid at n/2, then integrate over dygrid in the y direction from 0 to the point of interest i, and then over dxgrid again from n/2 to the point of interest j. This is a simple way to make sure most of the path of integration is inside the bulk of available data, by picking a path that goes mostly through the "middle" of the data range. Other alternative considerations would lead to different path selections.
So we do:
dxintegral = np.nancumsum(dxgrid, axis=1) * stepx
dyintegral = np.nancumsum(dygrid, axis=0) * stepy
and then (by somewhat brute force for clarity):
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(n):
    for j in range(n):
        valintegral[i, j] = np.ma.sum([dxintegral[0, n // 2], dyintegral[i, n // 2],
                                       dxintegral[i, j], -dxintegral[i, n // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
valintegral would be the result up to an arbitrary constant which can help put the "zero" where you want.
With your data shown here:
ax.tricontourf(masked_triang, time_array)
This is what I'm getting reconstructed when using this method:
ax.contourf(xgrid, ygrid, valintegral)
Hopefully this is somewhat helpful.
If you want to revisit the values at the original triangulation points, you can use interp2d on the valintegral regular grid data.
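For example (a sketch; interp2d is deprecated in recent SciPy, so this uses RegularGridInterpolator instead, reusing xspace, yspace and valintegral from above and the original point arrays x, y):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# rows of valintegral run along yspace, columns along xspace
sampler = RegularGridInterpolator((yspace, xspace), np.asarray(valintegral),
                                  bounds_error=False, fill_value=np.nan)
time_at_points = sampler(np.column_stack([y, x]))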
EDIT:
In reply to your edit, your adaptation above has a few errors:
Change the line (dx,dy) = np.gradient(grid_z1) to (dy,dx) = np.gradient(grid_z1)
In the integration loop change the dyintegral[i, len(yy) // 2] term to dyintegral[i, len(xx) // 2]
Better to replace the line valintegral = valintegral * np.isfinite(dxintegral) with valintegral[np.isnan(dx)] = np.nan
I was looking at the scipy cookbook implementation of the Savitzky-Golay algorithm:
def savitzky_golay(y, window_size, order, deriv=0, rate=1):
    r"""Smooth (and optionally differentiate) data with a Savitzky-Golay filter.

    The Savitzky-Golay filter removes high frequency noise from data.
    It has the advantage of preserving the original shape and
    features of the signal better than other types of filtering
    approaches, such as moving averages techniques.

    Parameters
    ----------
    y : array_like, shape (N,)
        the values of the time history of the signal.
    window_size : int
        the length of the window. Must be an odd integer number.
    order : int
        the order of the polynomial used in the filtering.
        Must be less than `window_size` - 1.
    deriv: int
        the order of the derivative to compute (default = 0 means only smoothing)

    Returns
    -------
    ys : ndarray, shape (N)
        the smoothed signal (or its n-th derivative).

    Notes
    -----
    The Savitzky-Golay is a type of low-pass filter, particularly
    suited for smoothing noisy data. The main idea behind this
    approach is to make for each point a least-squares fit with a
    polynomial of high order over an odd-sized window centered at
    the point.

    Examples
    --------
    t = np.linspace(-4, 4, 500)
    y = np.exp( -t**2 ) + np.random.normal(0, 0.05, t.shape)
    ysg = savitzky_golay(y, window_size=31, order=4)
    import matplotlib.pyplot as plt
    plt.plot(t, y, label='Noisy signal')
    plt.plot(t, np.exp(-t**2), 'k', lw=1.5, label='Original signal')
    plt.plot(t, ysg, 'r', label='Filtered signal')
    plt.legend()
    plt.show()

    References
    ----------
    .. [1] A. Savitzky, M. J. E. Golay, Smoothing and Differentiation of
       Data by Simplified Least Squares Procedures. Analytical
       Chemistry, 1964, 36 (8), pp 1627-1639.
    .. [2] Numerical Recipes 3rd Edition: The Art of Scientific Computing
       W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery
       Cambridge University Press ISBN-13: 9780521880688
    """
    import numpy as np
    from math import factorial

    try:
        window_size = abs(int(window_size))
        order = abs(int(order))
    except ValueError:
        raise ValueError("window_size and order have to be of type int")
    if window_size % 2 != 1 or window_size < 1:
        raise TypeError("window_size must be a positive odd number")
    if window_size < order + 2:
        raise TypeError("window_size is too small for the polynomials order")
    order_range = range(order+1)
    half_window = (window_size - 1) // 2
    # precompute coefficients
    b = np.array([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
    m = np.linalg.pinv(b)[deriv] * rate**deriv * factorial(deriv)
    # pad the signal at the extremes with
    # values taken from the signal itself
    firstvals = y[0] - np.abs(y[1:half_window+1][::-1] - y[0])
    lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
    y = np.concatenate((firstvals, y, lastvals))
    return np.convolve(m[::-1], y, mode='valid')
This is the part that confuses me:
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
I get that we need to 'pad' y, since otherwise the first window_size/2 points would be excluded, but I don't see the point of subtracting a particular value's absolute difference with y[0] from y[0].
I don't think the absolute value should be there, as otherwise, the trend gets mirrored horizontally if it starts by increasing, and vertically if it starts by decreasing.
As pointed out by @ImportanceOfBeingErnest, this may be a typo in the code, as can be seen by looking at the left-hand side of the plot on the page I linked to.
Indeed, this logic isn't right, which can be best seen by considering the case of y[0] and y[-1] being 0. I believe the intent was to achieve odd reflection, so that the first derivative would be continuous at the reflection point. The correct form for that is
firstvals = 2*y[0] - y[1:half_window+1][::-1]
lastvals = 2*y[-1] - y[-half_window-1:-1][::-1]
or, combining reversing and slicing in one step,
firstvals = 2*y[0] - y[half_window:0:-1]
lastvals = 2*y[-1] - y[-2:-half_window-2:-1]
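A quick numeric check on a linear toy signal (half_window = 3) shows the odd reflection continuing the trend instead of mirroring it:

import numpy as np
y = np.arange(8, dtype=float)                  # 0, 1, ..., 7
half_window = 3
firstvals = 2*y[0] - y[half_window:0:-1]       # [-3., -2., -1.]
lastvals = 2*y[-1] - y[-2:-half_window-2:-1]   # [ 8.,  9., 10.]
padded = np.concatenate((firstvals, y, lastvals))  # the straight line continues through both ends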
I should emphasize this is just some code contributed by a user. The actual Scipy implementation of Savitzky-Golay filter is entirely different.
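For production use you can call that implementation directly; a minimal usage sketch of scipy.signal.savgol_filter:

import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(-4, 4, 500)
y = np.exp(-t**2) + np.random.normal(0, 0.05, t.shape)
y_smooth = savgol_filter(y, window_length=31, polyorder=4)
dy = savgol_filter(y, 31, 4, deriv=1, delta=t[1] - t[0])  # smoothed first derivative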
I have experimental observations in a volume:
import numpy as np
# observations are not uniformly spaced
x = np.random.normal(0, 1, 10)
y = np.random.normal(5, 2, 10)
z = np.random.normal(10, 3, 10)
xx, yy, zz = np.meshgrid(x, y, z, indexing='ij')
# fake temperatures at those coords
tt = xx*2 + yy*2 + zz*2
# sample distances
dx = np.diff(x)
dy = np.diff(y)
dz = np.diff(z)
grad = np.gradient(tt, [dx, dy, dz]) # returns error
This gives me the error:
ValueError: operands could not be broadcast together with shapes (10,10,10) (3,9) (10,10,10).
EDIT: according to @jay-kominek in the comments below:
np.gradient won't work for you, it simply doesn't handle unevenly sampled data.
I've updated the question. Is there any function which can do my computation?
Two things to note: First, scalars are single values, not arrays. Second, the signature of the function is numpy.gradient(f, *varargs, **kwargs). Note the * before varargs. That means if varargs is a list, you pass *varargs. Or you can just provide the elements of varargs as separate arguments.
So, np.gradient wants a single value for the distance along each dimension, like:
np.gradient(tt, np.diff(x)[0], np.diff(y)[0], np.diff(z)[0])
or:
distances = [np.diff(x)[0], np.diff(y)[0], np.diff(z)[0]]
np.gradient(tt, *distances)
The required dx etc. to be passed to np.gradient aren't grids of differences, but just one scalar each. So grad = np.gradient(tt, 0.1, 0.1, 0.1) appears to work.
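Note that newer NumPy versions (1.13+) also accept a coordinate array per axis in place of the scalar spacings, which does handle unevenly spaced samples; a 1-D sketch:

import numpy as np
x = np.sort(np.random.normal(0, 1, 10))  # uneven sample locations
tt = x**2
grad = np.gradient(tt, x)  # pass the coordinates themselves, not their differences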
I'm using UnivariateSpline to construct piecewise polynomials for some data that I have. I would then like to use these splines in other programs (either in C or FORTRAN) and so I would like to understand the equation behind the generated spline.
Here is my code:
import numpy as np
import scipy as sp
from scipy.interpolate import UnivariateSpline
import matplotlib.pyplot as plt
import bisect
data = np.loadtxt('test_C12H26.dat')
Tmid = 800.0
print "Tmid", Tmid
nmid = bisect.bisect(data[:,0],Tmid)
fig = plt.figure()
plt.plot(data[:,0], data[:,7],ls='',marker='o',markevery=20)
npts = len(data[:,0])
#print "npts", npts
w = np.ones(npts)
w[0] = 100
w[nmid] = 100
w[npts-1] = 100
spline1 = UnivariateSpline(data[:nmid,0],data[:nmid,7],s=1,w=w[:nmid])
coeffs = spline1.get_coeffs()
print(coeffs)
print(spline1.get_knots())
print(spline1.get_residual())
print(coeffs[0] + coeffs[1] * (data[0,0] - data[0,0])
      + coeffs[2] * (data[0,0] - data[0,0])**2
      + coeffs[3] * (data[0,0] - data[0,0])**3,
      data[0,7])
print(coeffs[0] + coeffs[1] * (data[nmid,0] - data[0,0])
      + coeffs[2] * (data[nmid,0] - data[0,0])**2
      + coeffs[3] * (data[nmid,0] - data[0,0])**3,
      data[nmid,7])
print(Tmid, data[-1,0])
spline2 = UnivariateSpline(data[nmid-1:,0],data[nmid-1:,7],s=1,w=w[nmid-1:])
print(spline2.get_coeffs())
print(spline2.get_knots())
print(spline2.get_residual())
plt.plot(data[:,0],spline1(data[:,0]))
plt.plot(data[:,0],spline2(data[:,0]))
plt.savefig('test.png')
And here is the resulting plot. I believe I have valid splines for each interval, but it looks like my spline equation is not correct... I can't find any reference to what it is supposed to be in the scipy documentation. Does anybody know? Thanks!
The scipy documentation does not have anything to say about how one can take the coefficients and manually generate the spline curve. However, it is possible to figure out how to do this from the existing literature on B-splines. The following function bspleval shows how to construct the B-spline basis functions (the matrix B in the code), from which one can easily generate the spline curve by multiplying the coefficients with the highest-order basis functions and summing:
import numpy as np
import matplotlib.pyplot as plt

def bspleval(x, knots, coeffs, order, debug=False):
    '''
    Evaluate a B-spline at a set of points.

    Parameters
    ----------
    x : list or ndarray
        The set of points at which to evaluate the spline.
    knots : list or ndarray
        The set of knots used to define the spline.
    coeffs : list of ndarray
        The set of spline coefficients.
    order : int
        The order of the spline.

    Returns
    -------
    y : ndarray
        The value of the spline at each point in x.
    '''
    k = order
    t = knots
    m = len(t)
    npts = len(x)
    B = np.zeros((m-1, k+1, npts))

    if debug:
        print('k=%i, m=%i, npts=%i' % (k, m, npts))
        print('t=', t)
        print('coeffs=', coeffs)

    ## Create the zero-order B-spline basis functions.
    for i in range(m-1):
        B[i,0,:] = np.logical_and(x >= t[i], x < t[i+1]).astype(np.float64)
    if (k == 0):
        B[m-2,0,-1] = 1.0

    ## Next iteratively define the higher-order basis functions, working from lower order to higher.
    for j in range(1, k+1):
        for i in range(m-j-1):
            if (t[i+j] - t[i] == 0.0):
                first_term = 0.0
            else:
                first_term = ((x - t[i]) / (t[i+j] - t[i])) * B[i,j-1,:]
            if (t[i+j+1] - t[i+1] == 0.0):
                second_term = 0.0
            else:
                second_term = ((t[i+j+1] - x) / (t[i+j+1] - t[i+1])) * B[i+1,j-1,:]
            B[i,j,:] = first_term + second_term
        B[m-j-2,j,-1] = 1.0

    if debug:
        plt.figure()
        for i in range(m-1):
            plt.plot(x, B[i,k,:])
        plt.title('B-spline basis functions')

    ## Evaluate the spline by multiplying the coefficients with the highest-order basis functions.
    y = np.zeros(npts)
    for i in range(m-k-1):
        y += coeffs[i] * B[i,k,:]

    if debug:
        plt.figure()
        plt.plot(x, y)
        plt.title('spline curve')
        plt.show()

    return y
To give an example of how this can be used with Scipy's existing univariate spline functions, the following is an example script. This takes the input data and uses Scipy's functional and also its object-oriented approach to spline fitting. Taking the coefficients and knot points from either of the two and using these as inputs to our manually-calculated routine bspleval, we reproduce the same curve that they do. Note that the difference between the manually evaluated curve and Scipy's evaluation method is so small that it is almost certainly floating-point noise.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splrep, splev, UnivariateSpline

x = np.array([-273.0, -176.4, -79.8, 16.9, 113.5, 210.1, 306.8, 403.4, 500.0])
y = np.array([2.25927498e-53, 2.56028619e-03, 8.64512988e-01, 6.27456769e+00, 1.73894734e+01,
              3.29052124e+01, 5.14612316e+01, 7.20531200e+01, 9.40718450e+01])
x_nodes = np.array([-273.0, -263.5, -234.8, -187.1, -120.3, -34.4, 70.6, 194.6, 337.8, 500.0])
y_nodes = np.array([2.25927498e-53, 3.83520726e-46, 8.46685318e-11, 6.10568083e-04, 1.82380809e-01,
                    2.66344008e+00, 1.18164677e+01, 3.01811501e+01, 5.78812583e+01, 9.40718450e+01])
## Now get scipy's spline fit.
k = 3
tck = splrep(x_nodes, y_nodes, k=k, s=0)
knots = tck[0]
coeffs = tck[1]
print('knot points=', knots)
print('coefficients=', coeffs)
## Now try scipy's object-oriented version. The result is exactly the same as "tck": the knots are the
## same and the coeffs are the same, they are just queried in a different way.
uspline = UnivariateSpline(x_nodes, y_nodes, s=0)
uspline_knots = uspline.get_knots()
uspline_coeffs = uspline.get_coeffs()
## Here are scipy's native spline evaluation methods. Again, "ytck" and "y_uspline" are exactly equal.
ytck = splev(x, tck)
y_uspline = uspline(x)
y_knots = uspline(knots)
## Now let's try our manually-calculated evaluation function.
y_eval = bspleval(x, knots, coeffs, k, debug=False)
plt.plot(x, ytck, label='tck')
plt.plot(x, y_uspline, label='uspline')
plt.plot(x, y_eval, label='manual')
## Next plot the knots and nodes.
plt.plot(x_nodes, y_nodes, 'ko', markersize=7, label='input nodes') ## nodes
plt.plot(knots, y_knots, 'mo', markersize=5, label='tck knots') ## knots
plt.xlim((-300.0,530.0))
plt.legend(loc='best', prop={'size':14})
plt.figure()
plt.title('difference')
plt.plot(x, ytck-y_uspline, label='tck-uspl')
plt.plot(x, ytck-y_eval, label='tck-manual')
plt.legend(loc='best', prop={'size':14})
plt.show()
The coefficients given by get_coeffs are B-spline (Basis spline) coefficients, described here: B-spline (Wikipedia)
Probably whatever other program/language you will be using has an implementation. Supply the knot locations and coefficients, and you should be all set.
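If the other program prefers plain piecewise polynomials over B-spline basis coefficients, one option is SciPy's converter (a sketch, reusing x_nodes and y_nodes from the example above):

from scipy.interpolate import splrep, PPoly
tck = splrep(x_nodes, y_nodes, k=3, s=0)
pp = PPoly.from_spline(tck)
# pp.x holds the breakpoints; pp.c[:, i] are the cubic's coefficients on
# [pp.x[i], pp.x[i+1]] in the local variable (x - pp.x[i]), highest power first.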