Efficient re-gridding of large 2-D arrays in Python

I am trying to find the most efficient way to regrid a portion of a 2-D array to a finer grid. To keep my program general, I would prefer a solution using a standard package like numpy or scipy, but perhaps this is beyond their capacities.
The intended input data is a geotiff DEM (digital elevation models) file, imported with GDAL, and converted to a numpy array. An issue is that many of the input files have no CRS information.
In my MWE below, regridding to just a 2x2 grid (for demo purposes) takes 96 s on my local machine. The actual fine grids will be much larger, and the fine grid will be created multiple times in a loop. I admit this MWE might be the least efficient way to do this!
import numpy as np
import scipy.interpolate as interp
import time
nx,ny = 1600,2400 # size of input data
z_in = np.random.normal(size=(nx,ny)) # make input data
x_in,y_in = np.mgrid[0:nx,0:ny]*0.0932248 # make x,y coordinate arrays for input data
# make new (finer) x,y coordinate arrays. none of these x,y coordinates
# necessarily overlap directly with those of the input dataset.
nx_new,ny_new = 2,2 # size of fine array (in reality, this will be much larger (up to 1000x1000))
x_new,y_new = np.mgrid[0:nx_new,0:ny_new]
x_new = x_new*0.01 + 60
y_new = y_new*0.01 + 85
# regrid the data
starttime=time.time()
flattened_xy_in = np.array([x_in.flatten(),y_in.flatten()]).transpose()
fine_DEM_z = interp.griddata(flattened_xy_in, z_in.flatten(), (x_new, y_new), method='cubic')
print('that took '+str(time.time()-starttime)+' seconds...')

Your input data lies on a regular rectangular grid, so you are wasting resources by using griddata, which is designed for interpolating unstructured data. It would make more sense to use RectBivariateSpline, which can speed up the interpolation massively when the data is already on a grid.
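A minimal sketch of what that could look like for the arrays in the question (this assumes the input really is an evenly spaced rectangular grid, as in the MWE, so 1-D coordinate axes suffice; the spacing constant and fine-grid size are taken from the MWE):
import numpy as np
from scipy.interpolate import RectBivariateSpline
nx, ny = 1600, 2400
z_in = np.random.normal(size=(nx, ny))
# 1-D coordinate axes are all RectBivariateSpline needs for a regular grid
x_axis = np.arange(nx) * 0.0932248
y_axis = np.arange(ny) * 0.0932248
# fine grid axes (1000x1000 here, matching the "up to 1000x1000" in the question)
x_new = np.arange(1000) * 0.01 + 60
y_new = np.arange(1000) * 0.01 + 85
spline = RectBivariateSpline(x_axis, y_axis, z_in)  # cubic by default (kx=ky=3)
fine_DEM_z = spline(x_new, y_new)                   # shape (1000, 1000)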

Related

2D interpolate list of many points [duplicate]

So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar.
I then want to interpolate these property (temperature) values onto a bunch of different lat/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400).
Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... Thanks!!
Try the combination of inverse-distance weighting and scipy.spatial.KDTree described in the SO answer inverse-distance-weighted-idw-interpolation-with-python. Kd-trees work nicely in 2d, 3d, ...; inverse-distance weighting is smooth and local, and the k (number of nearest neighbours) can be varied to trade off speed against accuracy.
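A rough sketch of that approach (not the code from the linked answer, just an illustration using scipy.spatial.cKDTree; the usage lines at the bottom assume the LAT/LON/T arrays and the lat1/lon1 query points from the question):
import numpy as np
from scipy.spatial import cKDTree
def idw_kdtree(xy_known, values, xy_query, k=8, power=2, eps=1e-12):
    # inverse-distance weighting over the k nearest neighbours of each query point
    values = np.asarray(values)
    tree = cKDTree(xy_known)
    dist, idx = tree.query(xy_query, k=k)
    weights = 1.0 / (dist + eps) ** power
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights * values[idx]).sum(axis=1)
# hypothetical usage with the flattened LAT/LON grid and the ~10,000 query points:
# xy_known = np.column_stack([LON.ravel(), LAT.ravel()])
# T_query = idw_kdtree(xy_known, T.ravel(), np.column_stack([lon1, lat1]))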
There is a nice inverse-distance example by Roger Veciana i Rovira, along with some code using GDAL to write to GeoTIFF if you're into that.
This interpolates onto a regular grid, of course, so you would first project the data to a pixel grid with pyproj or something similar, all the while being careful about which projection your data uses.
A copy of his algorithm and example script:
from math import pow
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt

def pointValue(x, y, power, smoothing, xv, yv, values):
    nominator = 0
    denominator = 0
    for i in range(0, len(values)):
        dist = sqrt((x - xv[i])*(x - xv[i]) + (y - yv[i])*(y - yv[i]) + smoothing*smoothing)
        # If the point is really close to one of the data points, return the data
        # point value to avoid singularities
        if dist < 0.0000000001:
            return values[i]
        nominator = nominator + (values[i] / pow(dist, power))
        denominator = denominator + (1 / pow(dist, power))
    # Return NODATA if the denominator is zero
    if denominator > 0:
        value = nominator / denominator
    else:
        value = -9999
    return value

def invDist(xv, yv, values, xsize=100, ysize=100, power=2, smoothing=0):
    valuesGrid = np.zeros((ysize, xsize))
    for x in range(0, xsize):
        for y in range(0, ysize):
            valuesGrid[y][x] = pointValue(x, y, power, smoothing, xv, yv, values)
    return valuesGrid

if __name__ == "__main__":
    power = 1
    smoothing = 20
    # Creating some data, with the coordinates and the values stored in separate lists
    xv = [10, 60, 40, 70, 10, 50, 20, 70, 30, 60]
    yv = [10, 20, 30, 30, 40, 50, 60, 70, 80, 90]
    values = [1, 2, 2, 3, 4, 6, 7, 7, 8, 10]
    # Creating the output grid (100x100, in the example)
    ti = np.linspace(0, 100, 100)
    XI, YI = np.meshgrid(ti, ti)
    # Creating the interpolation function and populating the output matrix values
    ZI = invDist(xv, yv, values, 100, 100, power, smoothing)
    # Plotting the result (plt.Normalize rather than the old plt.normalize)
    n = plt.Normalize(0.0, 100.0)
    plt.subplot(1, 1, 1)
    plt.pcolor(XI, YI, ZI)
    plt.scatter(xv, yv, 100, values)
    plt.title('Inv dist interpolation - power: ' + str(power) + ' smoothing: ' + str(smoothing))
    plt.xlim(0, 100)
    plt.ylim(0, 100)
    plt.colorbar()
    plt.show()
There's a bunch of options here, which one is best will depend on your data...
However I don't know of an out-of-the-box solution for you
You say your input data is from tripolar data. There are three main cases for how this data could be structured:
1. Sampled from a 3d grid in tripolar space, projected back to 2d LAT, LON data.
2. Sampled from a 2d grid in tripolar space, projected into 2d LAT, LON data.
3. Unstructured data in tripolar space, projected into 2d LAT, LON data.
The easiest of these is 2. Instead of interpolating in LAT LON space, "just" transform your point back into the source space and interpolate there.
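A minimal sketch of that idea, assuming a hypothetical inverse projection latlon_to_tripolar() that maps a lat/lon pair back onto the regular (u, v) source grid (the real transform depends entirely on how the tripolar grid is defined; the grid and values below are placeholders):
import numpy as np
from scipy.interpolate import RectBivariateSpline
# regular source grid and temperatures on it -- placeholders for the real data
u = np.linspace(0, 1, 200)
v = np.linspace(0, 1, 400)
T_src = np.random.rand(u.size, v.size)
spline = RectBivariateSpline(u, v, T_src)
def latlon_to_tripolar(lat, lon):
    # hypothetical inverse projection; replace with the grid's actual transform
    return (lat + 90) / 180.0, (lon + 180) / 360.0
def sample(lat, lon):
    uu, vv = latlon_to_tripolar(lat, lon)
    return spline(uu, vv, grid=False)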
Another option that works for cases 1 and 2 is to search for the cells that map from tripolar space to cover your sample point (you can use a BSP or grid-type structure to speed up this search), pick one of those cells, and interpolate inside it.
Finally, there's a heap of unstructured interpolation options, but they tend to be slow.
A personal favourite of mine is a linear interpolation of the nearest N points; finding those N points can again be done with gridding or a BSP. Another good option is to Delaunay-triangulate the unstructured points and interpolate on the resulting triangular mesh.
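For the Delaunay route, scipy.interpolate.LinearNDInterpolator does the triangulation and the piecewise-linear interpolation in one go; a small sketch on synthetic scattered points (all data below is made up for illustration):
import numpy as np
from scipy.interpolate import LinearNDInterpolator
rng = np.random.default_rng(0)
# synthetic scattered (lon, lat) samples with a temperature at each
lon = rng.uniform(-180, 180, 5000)
lat = rng.uniform(-90, 90, 5000)
temp = np.cos(np.radians(lat)) + 0.1 * rng.standard_normal(lat.size)
# triangulate once, then evaluate at many query points
interp = LinearNDInterpolator(np.column_stack([lon, lat]), temp)
lon_q = rng.uniform(-170, 170, 10000)
lat_q = rng.uniform(-80, 80, 10000)
temp_q = interp(np.column_stack([lon_q, lat_q]))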
Personally if my mesh was case 1, I'd use an unstructured strategy as I'd be worried about having to handle searching through cells with overlapping projections. Choosing the "right" cell would be difficult.
I suggest you take a look at GRASS (an open-source GIS package) and its interpolation features (http://grass.ibiblio.org/gdp/html_grass62/v.surf.bspline.html). It's not in Python, but you can reimplement it or interface with the C code.
Am I right in thinking your data grids look something like this (red is the old data, blue is the new interpolated data)?
(image: http://www.geekops.co.uk/photos/0000-00-02%20%28Forum%20images%29/DataSeparation.png)
This might be a slightly brute-force approach, but what about rendering your existing data as a bitmap? OpenGL will do simple interpolation of colours for you with the right options configured, and you could render the data as triangles, which should be fairly fast. You could then sample pixels at the locations of the new points.
Alternatively, you could sort your first set of points spatially, find the closest old points surrounding each new point, and interpolate based on the distances to those points.
There is a FORTRAN library called BIVAR, which is very suitable for this problem. With a few modifications you can make it usable in python using f2py.
From the description:
BIVAR is a FORTRAN90 library which interpolates scattered bivariate data, by Hiroshi Akima.
BIVAR accepts a set of (X,Y) data points scattered in 2D, with associated Z data values, and is able to construct a smooth interpolation function Z(X,Y), which agrees with the given data, and can be evaluated at other points in the plane.

How to plot an array as if the indices i,j were the x,y coordinates?

Hi guys, first question here; I looked for an answer but could not find anything, so I will try to give it my best.
I am currently working on a problem in the field of computational physics and I am solving the Navier-Stokes equations numerically using the finite difference method. It's my first time working with Python (using a Google Colaboratory notebook with Python 3). I am solving the equations for a grid of points in a two-dimensional plane. I created this grid using np.arrays
import numpy as np
import matplotlib.pyplot as plt
N = 10
data = np.zeros((N,N))
and then manipulating it. For example
for i in range(N):
    for j in range(N):
        data[i,j] = i
which makes the values of the array increase with index i. However, if I plot my data-array now using
x = np.arange(N)
y = np.arange(N)
plt.contourf(x, y, data)
plt.colorbar()
The result of the example shows that the plotted data increases along the y-axis, even though my manipulation of the array should make it increase along the x-axis.
I noticed this happens because the indexing of arrays (i,j) is different from the standard orientation of x- and y-axis, but how can I plot my data-array as if i=x and j=y?
You can use numpy's ndindex function to get the indices based on the shape and then unzip the result:
x, y = list(zip(*np.ndindex((N, N))))
The data is indexed row by column; the corresponding coordinate arrays can be obtained with meshgrid. If you want the same manipulation, you can build the coordinates with meshgrid as
dx, dy = np.meshgrid(np.arange(N), np.arange(N))
With the default indexing, dx varies along the plot's x-axis and dy along its y-axis, so plot dx (equivalently, data.T) to see the values increase along the x-axis.
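A short sketch of that, assuming the same N = 10 grid as in the question:
import numpy as np
import matplotlib.pyplot as plt
N = 10
x = np.arange(N)
y = np.arange(N)
dx, dy = np.meshgrid(x, y)   # dx varies along columns, dy along rows
plt.contourf(x, y, dx)       # values now increase along the x-axis
plt.colorbar()
plt.show()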

Python 4D linear interpolation on a rectangular grid

I need to interpolate temperature data linearly in 4 dimensions (latitude, longitude, altitude and time).
The number of points is fairly high (360x720x50x8) and I need a fast method of computing the temperature at any point in space and time within the data bounds.
I have tried using scipy.interpolate.LinearNDInterpolator but using Qhull for triangulation is inefficient on a rectangular grid and takes hours to complete.
By reading this SciPy ticket, the solution seemed to be implementing a new nd interpolator using the standard interp1d to calculate a higher number of data points, and then use a "nearest neighbor" approach with the new dataset.
This, however, takes a long time again (minutes).
Is there a quick way of interpolating data on a rectangular grid in 4 dimensions without it taking minutes to accomplish?
I thought of using interp1d four times without calculating a higher density of points, instead leaving it to the user to call with the coordinates, but I can't get my head around how to do this.
Otherwise would writing my own 4D interpolator specific to my needs be an option here?
Here's the code I've been using to test this:
Using scipy.interpolate.LinearNDInterpolator:
import numpy as np
from scipy.interpolate import LinearNDInterpolator
lats = np.arange(-90,90.5,0.5)
lons = np.arange(-180,180,0.5)
alts = np.arange(1,1000,21.717)
time = np.arange(8)
data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time)))
coords = np.zeros((len(lats),len(lons),len(alts),len(time),4))
coords[...,0] = lats.reshape((len(lats),1,1,1))
coords[...,1] = lons.reshape((1,len(lons),1,1))
coords[...,2] = alts.reshape((1,1,len(alts),1))
coords[...,3] = time.reshape((1,1,1,len(time)))
coords = coords.reshape((data.size,4))
interpolatedData = LinearNDInterpolator(coords,data)
Using scipy.interpolate.interp1d:
import numpy as np
from scipy.interpolate import interp1d
lats = np.arange(-90,90.5,0.5)
lons = np.arange(-180,180,0.5)
alts = np.arange(1,1000,21.717)
time = np.arange(8)
data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time)))
interpolatedData = np.array([None, None, None, None])
interpolatedData[0] = interp1d(lats,data,axis=0)
interpolatedData[1] = interp1d(lons,data,axis=1)
interpolatedData[2] = interp1d(alts,data,axis=2)
interpolatedData[3] = interp1d(time,data,axis=3)
Thank you very much for your help!
In the same ticket you have linked, there is an example implementation of what they call tensor product interpolation, showing the proper way to nest recursive calls to interp1d. This is equivalent to quadrilinear interpolation if you choose the default kind='linear' parameter for your interp1d's.
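A minimal sketch of such a tensor-product scheme, nesting interp1d one axis at a time (a hypothetical helper, not the exact code from the ticket):
import numpy as np
from scipy.interpolate import interp1d
def tensor_interp(axes, data, point):
    # interpolate along the first axis at point[0], then recurse on the remaining axes
    reduced = interp1d(axes[0], data, axis=0)(point[0])  # default kind='linear'
    if len(axes) == 1:
        return reduced
    return tensor_interp(axes[1:], reduced, point[1:])
# hypothetical usage with the arrays from the question:
# tensor_interp((lats, lons, alts, time), data, (12.3, -4.2, 500.5, 2.5))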
While this may be good enough, it is not true linear interpolation, and there will be higher-order terms in the interpolation function, as this image from the Wikipedia entry on bilinear interpolation shows:
This may very well be good enough for what you are after, but there are applications where a triangulated, truly piecewise-linear, interpolation is preferred. If you really need this, there is an easy way of working around the slowness of qhull.
Once LinearNDInterpolator has been set up, there are two steps to coming up with an interpolated value for a given point:
figure out inside which triangle (4D hypertetrahedron in your case) the point is, and
interpolate using the barycentric coordinates of the point relative to the vertices as weights.
You probably do not want to mess with barycentric coordinates, so better leave that to LinearNDInterpolator. But you do know some things about the triangulation. Mostly that, because you have a regular grid, within each hypercube the triangulation is going to be the same. So to interpolate a single value, you could first determine which subcube your point is in, build a LinearNDInterpolator with the 16 vertices of that subcube, and use it to interpolate your value:
from itertools import product
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def interpolator(coords, data, point):
    dims = len(point)
    indices = []
    sub_coords = []
    for j in range(dims):
        # locate the grid interval containing point[j] along axis j
        idx = np.digitize([point[j]], coords[j])[0]
        indices += [[idx - 1, idx]]
        sub_coords += [coords[j][indices[-1]]]
    indices = np.array([j for j in product(*indices)])
    sub_coords = np.array([j for j in product(*sub_coords)])
    # tuple (not list) of index arrays for numpy fancy indexing
    sub_data = data[tuple(np.swapaxes(indices, 0, 1))]
    li = LinearNDInterpolator(sub_coords, sub_data)
    return li([point])[0]
>>> point = np.array([12.3,-4.2, 500.5, 2.5])
>>> interpolator((lats, lons, alts, time), data, point)
0.386082399091
This cannot work on vectorized data, since that would require storing a LinearNDInterpolator for every possible subcube, and even though it probably would be faster than triangulating the whole thing, it would still be very slow.
scipy.ndimage.map_coordinates is a nice, fast interpolator for uniform grids (all boxes the same size). See multivariate-spline-interpolation-in-python-scipy on SO for a clear description.
For non-uniform rectangular grids, a simple wrapper, Intergrid, maps / scales non-uniform to uniform grids, then does map_coordinates.
On a 4d test case like yours it takes about 1 μsec per query:
Intergrid: 1000000 points in a (361, 720, 47, 8) grid took 652 msec
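A small sketch of the uniform-grid case, converting physical coordinates to fractional array indices by hand and then calling map_coordinates (the axes below mirror the ones in the question and are uniformly spaced, which is what makes the simple scaling valid):
import numpy as np
from scipy.ndimage import map_coordinates
lats = np.arange(-90, 90.5, 0.5)
lons = np.arange(-180, 180, 0.5)
alts = np.arange(1, 1000, 21.717)
time = np.arange(8)
data = np.random.rand(len(lats), len(lons), len(alts), len(time))
# query points in physical coordinates, one row per point
points = np.array([[12.3, -4.2, 500.5, 2.5],
                   [45.0, 10.0, 100.0, 6.0]])
# fractional index = (value - axis_start) / axis_step, valid only for uniform axes
axes = (lats, lons, alts, time)
idx = np.array([(points[:, k] - ax[0]) / (ax[1] - ax[0]) for k, ax in enumerate(axes)])
values = map_coordinates(data, idx, order=1)   # order=1 gives (quad)linear interpolation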
For very similar things I use Scientific.Functions.Interpolation.InterpolatingFunction.
import numpy as np
from Scientific.Functions.Interpolation import InterpolatingFunction
lats = np.arange(-90,90.5,0.5)
lons = np.arange(-180,180,0.5)
alts = np.arange(1,1000,21.717)
time = np.arange(8)
data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time)))
axes = (lats, lons, alts, time)
f = InterpolatingFunction(axes, data)
You can now leave it to the user to call the InterpolatingFunction with coordinates:
>>> f(0,0,10,3)
0.7085675631375401
InterpolatingFunction has nice additional features, such as integration and slicing.
However, I do not know for sure whether the interpolation is linear. You would have to look in the module source to find out.
I cannot open this address or find enough information about this package.

Interpolate large irregular grid onto another irregular grid in Python

I am trying to interpolate complex values from one irregular grid to another irregular grid using Python. The grids are in 2D and there are 103,113 data points. I am using Python 2.6.6, Scipy 0.7.2, Numpy 1.3.0, Matplotlib 0.99.3
In Matlab using griddata this is achieved in roughly 5 seconds.
BnGRID2 = griddata(R_GRID1,Z_GRID1,BnGRID1,R_GRID2,Z_GRID2) (MATLAB)
(Note all arrays are 201 x 513)
However, if I try using matplotlib.mlab.griddata I get a MemoryError, even if I try to work with the real part only:
mlab.griddata(R_GRID1.flatten(),Z_GRID1.flatten(),num.real(BnGRID1.flatten()),R_GRID2.flatten(),Z_GRID2.flatten())
If I try using interp2d I get a segmentation fault and Python exits:
a = interp.interp2d(R_GRID1,Z_GRID1,num.real(BnGRID1))
I have tried using KDTree and this seems to work ok, however, it takes a few minutes compared with the few seconds for Matlab, but I haven't explored this option too much yet.
Was wondering if anyone has any ideas how I can get this done as quickly as Matlab seems to? I noticed that the newer version of Scipy also has griddata, does anyone know if this can handle large irregular grids?
Scipy's griddata seems to be able to deal with data sets of this size without problems:
import numpy as np
import scipy.interpolate
# old grid
x, y = np.mgrid[0:1:201j, 0:1:513j]
z = np.sin(x*20) * (1j + np.cos(y*3))**2 # some data
# new grid
x2, y2 = np.mgrid[0.1:0.9:201j, 0.1:0.9:513j]
# interpolate onto the new grid
z2 = scipy.interpolate.griddata((x.ravel(), y.ravel()), z.ravel(), (x2, y2), method='cubic')
The griddata step takes about 5s on an old AMD Athlon.
If your data is on a grid (i.e., the coordinates corresponding to value z[i,j] are (x[i], y[j])), you can get more speed by using scipy.interpolate.RectBivariateSpline
z3 = (scipy.interpolate.RectBivariateSpline(x[:,0], y[0,:], z.real)(x2[:,0], y2[0,:])
+ 1j*scipy.interpolate.RectBivariateSpline(x[:,0], y[0,:], z.imag)(x2[:,0], y2[0,:]))
which takes 0.05s. It's much faster, because even if your grid spacings are irregular, a more efficient algorithm can be used as long as the grid is rectangular.

Multivariate spline interpolation in python/scipy?

Is there a library module or other straightforward way to implement multivariate spline interpolation in python?
Specifically, I have a set of scalar data on a regularly-spaced three-dimensional grid which I need to interpolate at a small number of points scattered throughout the domain. For two dimensions, I have been using scipy.interpolate.RectBivariateSpline, and I'm essentially looking for an extension of that to three-dimensional data.
The N-dimensional interpolation routines I have found are not quite good enough: I would prefer splines over LinearNDInterpolator for smoothness, and I have far too many data points (often over one million) for, e.g., a radial basis function to work.
If anyone knows of a python library that can do this, or perhaps one in another language that I could call or port, I'd really appreciate it.
If I'm understanding your question correctly, your input "observation" data is regularly gridded?
If so, scipy.ndimage.map_coordinates does exactly what you want.
It's a bit hard to understand at first pass, but essentially, you just feed it a sequence of coordinates that you want to interpolate the values of the grid at in pixel/voxel/n-dimensional-index coordinates.
As a 2D example:
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
# Note that the output interpolated coords will be the same dtype as your input
# data. If we have an array of ints, and we want floating point precision in
# the output interpolated points, we need to cast the array as floats
data = np.arange(40).reshape((8,5)).astype(float)
# I'm writing these as row, column pairs for clarity...
coords = np.array([[1.2, 3.5], [6.7, 2.5], [7.9, 3.5], [3.5, 3.5]])
# However, map_coordinates expects the transpose of this
coords = coords.T
# The "mode" kwarg here just controls how the boundaries are treated
# mode='nearest' is _not_ nearest neighbor interpolation, it just uses the
# value of the nearest cell if the point lies outside the grid. The default is
# to treat the values outside the grid as zero, which can cause some edge
# effects if you're interpolating points near the edge
# The "order" kwarg controls the order of the splines used. The default is
# cubic splines, order=3
zi = ndimage.map_coordinates(data, coords, order=3, mode='nearest')
row, column = coords
nrows, ncols = data.shape
im = plt.imshow(data, interpolation='nearest', extent=[0, ncols, nrows, 0])
plt.colorbar(im)
plt.scatter(column, row, c=zi, vmin=data.min(), vmax=data.max())
for r, c, z in zip(row, column, zi):
    plt.annotate('%0.3f' % z, (c,r), xytext=(-10,10), textcoords='offset points',
                 arrowprops=dict(arrowstyle='->'), ha='right')
plt.show()
To do this in n-dimensions, we just need to pass in the appropriate sized arrays:
import numpy as np
from scipy import ndimage
data = np.arange(3*5*9).reshape((3,5,9)).astype(float)
coords = np.array([[1.2, 3.5, 7.8], [0.5, 0.5, 6.8]])
zi = ndimage.map_coordinates(data, coords.T)
As far as scaling and memory usage go, map_coordinates will create a filtered copy of the array if you're using an order > 1 (i.e. not linear interpolation). If you just want to interpolate at a very small number of points, this is a rather large overhead. It doesn't increase with the number of points you want to interpolate at, however. As long as you have enough RAM for a single temporary copy of your input data array, you'll be fine.
If you can't store a copy of your data in memory, you can either a) specify prefilter=False and order=1 and use linear interpolation, or b) replace your original data with a filtered version using ndimage.spline_filter, and then call map_coordinates with prefilter=False.
Even if you have enough ram, keeping the filtered dataset around can be a big speedup if you need to call map_coordinates multiple times (e.g. interactive use, etc).
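A short sketch of option (b), pre-filtering once and reusing the filtered array across repeated map_coordinates calls (array sizes and query coordinates here are arbitrary):
import numpy as np
from scipy import ndimage
data = np.random.rand(100, 100, 100)
filtered = ndimage.spline_filter(data, order=3)    # do the expensive filtering once
coords = np.array([[10.3, 20.5, 55.1],
                   [80.9, 12.0, 99.2]]).T          # shape (ndim, npoints)
# prefilter=False tells map_coordinates the input is already spline-filtered
zi = ndimage.map_coordinates(filtered, coords, order=3, prefilter=False)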
Smooth spline interpolation in dim > 2 is difficult to implement, and so there are not many freely available libraries able to do that (in fact, I don't know any).
You can try inverse distance weighted interpolation, see: Inverse Distance Weighted (IDW) Interpolation with Python .
This should produce reasonably smooth results, and scale better than RBF to larger data sets.
