Fast interpolation of grid data - python

I have a large 3d np.ndarray of data that represents a physical variable sampled over a volume in a regular grid fashion (as in the value in array[0,0,0] represents the value at physical coords (0,0,0)).
I would like to go to a finer grid spacing by interpolating the data in the rough grid. At the moment I'm using scipy.interpolate.griddata with linear interpolation, but it's pretty slow (~90 seconds for a 20x20x20 array). It's a bit overengineered for my purposes, since it allows random sampling of the volume data. Is there anything out there that can take advantage of my regularly spaced data and the fact that I only want to interpolate to a limited set of specific points?

Sure! There are two options that do different things but both exploit the regularly-gridded nature of the original data.
The first is scipy.ndimage.zoom. If you just want to produce a denser regular grid based on interpolating the original data, this is the way to go.
The second is scipy.ndimage.map_coordinates. If you'd like to interpolate a few (or many) arbitrary points in your data, but still exploit the regularly-gridded nature of the original data (e.g. no quadtree required), it's the way to go.
"Zooming" an array (scipy.ndimage.zoom)
As a quick example (this uses cubic interpolation by default; use order=1 for bilinear, order=0 for nearest, etc.):
import numpy as np
import scipy.ndimage as ndimage
data = np.arange(9).reshape(3,3)
print('Original:\n', data)
print('Zoomed by 2x:\n', ndimage.zoom(data, 2))
This yields:
Original:
[[0 1 2]
[3 4 5]
[6 7 8]]
Zoomed by 2x:
[[0 0 1 1 2 2]
[1 1 1 2 2 3]
[2 2 3 3 4 4]
[4 4 5 5 6 6]
[5 6 6 7 7 7]
[6 6 7 7 8 8]]
This also works for 3D (and nD) arrays. However, be aware that if you zoom by 2x, for example, you'll zoom along all axes.
data = np.arange(27).reshape(3,3,3)
print('Original:\n', data)
print('Zoomed by 2x gives an array of shape:', ndimage.zoom(data, 2).shape)
This yields:
Original:
[[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]]
[[ 9 10 11]
[12 13 14]
[15 16 17]]
[[18 19 20]
[21 22 23]
[24 25 26]]]
Zoomed by 2x gives an array of shape: (6, 6, 6)
If you have something like a 3-band, RGB image that you'd like to zoom, you can do this by specifying a sequence of per-axis zoom factors:
print('Zoomed by 2x along the last two axes:')
print(ndimage.zoom(data, (1, 2, 2)))
This yields:
Zoomed by 2x along the last two axes:
[[[ 0 0 1 1 2 2]
[ 1 1 1 2 2 3]
[ 2 2 3 3 4 4]
[ 4 4 5 5 6 6]
[ 5 6 6 7 7 7]
[ 6 6 7 7 8 8]]
[[ 9 9 10 10 11 11]
[10 10 10 11 11 12]
[11 11 12 12 13 13]
[13 13 14 14 15 15]
[14 15 15 16 16 16]
[15 15 16 16 17 17]]
[[18 18 19 19 20 20]
[19 19 19 20 20 21]
[20 20 21 21 22 22]
[22 22 23 23 24 24]
[23 24 24 25 25 25]
[24 24 25 25 26 26]]]
Arbitrary interpolation of regularly-gridded data using map_coordinates
The first thing to understand about map_coordinates is that it operates in pixel coordinates (e.g. just like you'd index the array, but the values can be floats). From your description, this is exactly what you want, but it often confuses people. For example, if you have x, y, z "real-world" coordinates, you'll need to transform them to index-based "pixel" coordinates.
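As a minimal sketch of that transform (assuming an evenly spaced grid spanning lo to hi with npts samples per axis; the helper name is mine, not part of scipy):
import numpy as np
def world_to_pixel(coords, lo, hi, npts):
    # Map real-world coordinates in [lo, hi] to fractional pixel
    # indices in [0, npts - 1]; all arguments are per-axis array-likes.
    coords, lo, hi, npts = map(np.asarray, (coords, lo, hi, npts))
    return (coords - lo) * (npts - 1) / (hi - lo)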
At any rate, let's say we wanted to interpolate the value in the original array at position 1.2, 0.3, 1.4.
If you're thinking of this in terms of the earlier RGB image case, the first coordinate corresponds to the "band", the second to the "row" and the last to the "column". What order corresponds to what depends entirely on how you decide to structure your data, but I'm going to use these as "z, y, x" coordinates, as it makes the comparison to the printed array easier to visualize.
import numpy as np
import scipy.ndimage as ndimage
data = np.arange(27).reshape(3,3,3)
print('Original:\n', data)
print('Sampled at 1.2, 0.3, 1.4:')
print(ndimage.map_coordinates(data, [[1.2], [0.3], [1.4]]))
This yields:
Original:
[[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]]
[[ 9 10 11]
[12 13 14]
[15 16 17]]
[[18 19 20]
[21 22 23]
[24 25 26]]]
Sampled at 1.2, 0.3, 1.4:
[14]
Once again, this is cubic interpolation by default. Use the order kwarg to control the type of interpolation.
It's worth noting here that all of scipy.ndimage's operations preserve the dtype of the original array. If you want floating point results, you'll need to cast the original array to float:
In [74]: ndimage.map_coordinates(data.astype(float), [[1.2], [0.3], [1.4]])
Out[74]: array([ 13.5965])
Another thing you may notice is that the coordinate format is rather cumbersome for a single point (e.g. it expects a 3xN array instead of an Nx3 array). However, it's arguably nicer when you have sequences of coordinates. For example, consider the case of sampling along a line that passes through the "cube" of data:
xi = np.linspace(0, 2, 10)
yi = 0.8 * xi
zi = 1.2 * xi
print(ndimage.map_coordinates(data, [zi, yi, xi]))
This yields:
[ 0 1 4 8 12 17 21 24 0 0]
This is also a good place to mention how boundary conditions are handled. By default, anything outside of the array is set to 0. Thus the last two values in the sequence are 0. (i.e. zi is > 2 for the last two elements).
If we wanted the points outside the array to be, say -999 (We can't use nan as this is an integer array. If you want nan, you'll need to cast to floats.):
In [75]: ndimage.map_coordinates(data, [zi, yi, xi], cval=-999)
Out[75]: array([ 0, 1, 4, 8, 12, 17, 21, 24, -999, -999])
If we wanted it to return the nearest value for points outside the array, we'd do:
In [76]: ndimage.map_coordinates(data, [zi, yi, xi], mode='nearest')
Out[76]: array([ 0, 1, 4, 8, 12, 17, 21, 24, 25, 25])
You can also use "reflect" and "wrap" as boundary modes, in addition to "nearest" and the default "constant". These are fairly self-explanatory, but try experimenting a bit if you're confused.
For example, let's interpolate a line along the first row of the first band in the array that extends for twice the distance of the array:
xi = np.linspace(0, 5, 10)
yi, zi = np.zeros_like(xi), np.zeros_like(xi)
The default (mode='constant') gives:
In [77]: ndimage.map_coordinates(data, [zi, yi, xi])
Out[77]: array([0, 0, 1, 2, 0, 0, 0, 0, 0, 0])
Compare this to:
In [78]: ndimage.map_coordinates(data, [zi, yi, xi], mode='reflect')
Out[78]: array([0, 0, 1, 2, 2, 1, 2, 1, 0, 0])
In [79]: ndimage.map_coordinates(data, [zi, yi, xi], mode='wrap')
Out[79]: array([0, 0, 1, 2, 0, 1, 1, 2, 0, 1])
Hopefully that clarifies things a bit!

Great answer by Joe. Based on his suggestion, I created the regulargrid package (https://pypi.python.org/pypi/regulargrid/, source at https://github.com/JohannesBuchner/regulargrid)
It provides support for n-dimensional Cartesian grids (as needed here) via the very fast scipy.ndimage.map_coordinates for arbitrary coordinate scales.
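These days scipy also ships scipy.interpolate.RegularGridInterpolator, which targets exactly this use case (interpolation on a regular grid, evaluated at a chosen set of points); a minimal sketch with made-up data:
import numpy as np
from scipy.interpolate import RegularGridInterpolator
data = np.random.rand(20, 20, 20)
axes = [np.arange(n) for n in data.shape]  # one coordinate vector per axis
interp = RegularGridInterpolator(axes, data, method='linear')
pts = np.array([[1.2, 0.3, 1.4],
                [10.5, 3.3, 7.7]])  # (N, 3) array of query points
print(interp(pts))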

Related

Numpy array add in place, values not summed when selecting the same row multiple times

Suppose I have a matrix; I want to select a few rows and add, in place, another array of the correct shape. The problem is, when a row is selected multiple times, the values from the other array are not summed:
Example:
I have a 5x3 matrix:
>>> import numpy as np
>>> x = np.arange(15).reshape((5,3))
>>> print(x)
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]
[12 13 14]]
I want to select a few rows, and add values:
>>> print(x[np.array([[1,1],[2,3]])])  # row 1 is selected twice
[[[ 3 4 5]
[ 3 4 5]]
[[ 6 7 8]
[ 9 10 11]]]
>>> add_value = np.random.randint(0,10,(2,2,3))
>>> print(add_value)
[[[6 1 2] # add to row 1
[9 8 5]] # add to row 1 again!
[[5 0 5] # add to row 2
[1 9 3]]] # add to row 3
>>> x[np.array([[1,1],[2,3]])] += add_value
>>> print(x)
[[ 0 1 2]
[12 12 10] # [12,12,10]=[3,4,5]+[9,8,5]
[11 7 13]
[10 19 14]
[12 13 14]]
As above, row 1 is [12,12,10], which means only [9,8,5] was added; [6,1,2] was not summed in. Are there any solutions? Thanks!
This behavior is described in the numpy documentation, near the bottom of this page, under "assigning values to indexed arrays":
https://numpy.org/doc/stable/user/basics.indexing.html#basics-indexing
Quoting:
Unlike some of the references (such as array and mask indices) assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively expect. This particular example is often surprising to people:
>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])
Where people expect that the 1st location will be incremented by 3. In fact, it will only be incremented by 1. The reason is that a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of the array at x[1] + 1 is assigned to x[1] three times, rather than being incremented 3 times.
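For the quoted 1-D example, the unbuffered np.add.at form does accumulate the repeats; a quick sketch:
import numpy as np
x = np.arange(0, 50, 10)
np.add.at(x, [1, 1, 3, 1], 1)  # unbuffered: every repeated index takes effect
print(x)  # [ 0 13 20 31 40]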
Just wanna share what @hpaulj suggests, using np.add.at:
>>> import numpy as np
>>> x = np.arange(15).reshape((5,3))
>>> select = np.array([[1,1],[2,3]])
>>> add_value = np.array([[[6,1,2],[9,8,5]],[[5,0,5],[1,9,3]]])
>>> np.add.at(x, select.flatten(), add_value.reshape(-1, add_value.shape[-1]))
>>> print(x)
[[ 0 1 2]
[18 13 12]
[11 7 13]
[10 19 14]
[12 13 14]]
Now row 1 is [18,13,12], which is the sum of [3,4,5], [6,1,2] and [9,8,5].

Convert c-order index into f-order index in Python

I am trying to find a solution to the following problem. I have an index in C-order and I need to convert it into F-order.
To explain simply my problem, here is an example:
Let's say we have a matrix x as:
x = np.arange(1,5).reshape(2,2)
print(x)
[[1 2]
 [3 4]]
Then the flattened matrix in C order is:
flat_c = x.ravel()
print(flat_c)
[1 2 3 4]
Now, the value 3 is at the 2nd position of the flat_c vector i.e. flat_c[2] is 3.
If I would flatten the matrix x using the F-order, I would have:
flat_f = x.ravel(order='f')
print(flat_f)
[1 3 2 4]
Now, the value 3 is at the 1st position of the flat_f vector i.e. flat_f[1] is 3.
I am trying to find a way to get the F-order index knowing the dimension of the matrix and the corresponding index in C-order.
I tried using np.unravel_index but this function returns the matrix positions...
We can use a combination of np.ravel_multi_index and np.unravel_index for an ndarray-supported solution. So, given the shape s of input array a and a C-order index c_idx, it would be -
s = a.shape
f_idx = np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
So, the idea is pretty simple: use np.unravel_index to get the n-dim C-based indices, then get the flattened linear index in Fortran order by calling np.ravel_multi_index with the flipped shape and those flipped n-dim indices to simulate Fortran behavior.
Sample runs on 2D -
In [321]: a
Out[321]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
In [322]: s = a.shape
In [323]: c_idx = 6
In [324]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[324]: 4
In [325]: c_idx = 12
In [326]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[326]: 8
Sample run on 3D array -
In [336]: a
Out[336]:
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
In [337]: s = a.shape
In [338]: c_idx = 21
In [339]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[339]: 9
In [340]: a.ravel('F')[9]
Out[340]: 21
Suppose your matrix is of shape (nrow,ncol). Then the flattened 1D index (in C order) for the (irow,icol) entry is given by
idxc = ncol*irow + icol
In the above equation, you know idxc. Then,
icol = idxc % ncol
Now you can find irow:
irow = (idxc - icol) // ncol
Now you know both irow and icol. You can use them to get the F index. I think the F index will be given by
idxf = nrow*icol + irow
Please double-check my math, I might have got something wrong...
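A quick numeric check of the 2D formula (a sketch; // is Python's integer division):
import numpy as np
nrow, ncol = 3, 5
a = np.arange(nrow * ncol).reshape(nrow, ncol)
for idxc in range(a.size):
    icol = idxc % ncol
    irow = (idxc - icol) // ncol
    idxf = nrow * icol + irow
    assert a.ravel('C')[idxc] == a.ravel('F')[idxf]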
For the 3D case, if your array has dimensions [n1][n2][n3], then the flattened C-index for [i1][i2][i3] is
idxc = n2*n3*i1 + n3*i2 + i3
Using modulo operations as in the 2D case, we can recover i1, i2, i3:
n3*i2 + i3 = idxc % (n2*n3)
i3 = (n3*i2 + i3) % n3
i2 = ((n3*i2 + i3) - i3) // n3
i1 = (idxc - (n3*i2 + i3)) // (n2*n3)
F index would be:
idxf = i1 + n1*i2 + n1*n2*i3
Please check my math.
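The same numeric check for the 3D formulas (a sketch):
import numpy as np
n1, n2, n3 = 2, 3, 5
a = np.arange(n1 * n2 * n3).reshape(n1, n2, n3)
for idxc in range(a.size):
    i3 = idxc % n3
    i2 = ((idxc % (n2 * n3)) - i3) // n3
    i1 = idxc // (n2 * n3)
    idxf = i1 + n1 * i2 + n1 * n2 * i3
    assert a.ravel('C')[idxc] == a.ravel('F')[idxf]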
In simple cases you may also get away with transposing and ravelling the array:
import numpy as np
x = np.arange(2 * 2).reshape(2, 2)
print(x)
# [[0 1]
# [2 3]]
print(x.ravel())
# [0 1 2 3]
print(x.transpose().ravel())
# [0 2 1 3]
x = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print(x)
# [[[ 0 1 2 3]
# [ 4 5 6 7]
# [ 8 9 10 11]]
# [[12 13 14 15]
# [16 17 18 19]
# [20 21 22 23]]]
print(x.ravel())
# [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
print(x.transpose().ravel())
# [ 0 12 4 16 8 20 1 13 5 17 9 21 2 14 6 18 10 22 3 15 7 19 11 23]

3D slice from 3D array

I have a large 3D, N x N x N, numpy array with a value at each index in the array.
I want to be able to take cubic slices from the array using a center point:
def take_slice(large_array, center_point):
...
return cubic_slice_from_center
To illustrate, I want cubic_slice_from_center to come back as a 3x3x3 array like the following, where cubic_slice_from_center[1][1][1] would be the value of the center point used to generate the slice:
print(cubic_slice_from_center)
array([[[0.32992015, 0.30037145, 0.04947877],
[0.0158681 , 0.26743224, 0.49967057],
[0.04274621, 0.0738851 , 0.60360489]],
[[0.78985965, 0.16111745, 0.51665212],
[0.08491344, 0.30240689, 0.23544363],
[0.47282742, 0.5777977 , 0.92652398]],
[[0.78797628, 0.98634545, 0.17903971],
[0.76787071, 0.29689657, 0.08112121],
[0.08786254, 0.06319838, 0.27050039]]])
I looked at a couple of ways to do this. One way was the following:
def get_cubic_slice(space, slice_center_x, slice_center_y, slice_center_z):
return space[slice_center_x-1:slice_center_x+2,
slice_center_y-1:slice_center_y+2,
slice_center_z-1:slice_center_z+2]
This works so long as the cubic slice is not on the edge but, if it is on the edge, it returns an empty array!
Sometimes, the center point of the slice will be on the edge of the 3D numpy array. When this occurs, rather than return nothing, I would like to return the values of the slice of cubic space that are within the bounds of the space and, where the slice would be out of bounds, fill the return array with np.nan values.
For example, for a 20 x 20 x 20 space, with indices 0-19 for the x, y and z axes, I would like the get_cubic_slice function to return the following kind of result for the point (0,5,5):
print(get_cubic_slice(space,0,5,5))
array([[[np.nan, np.nan, np.nan],
[np.nan , np.nan, np.nan],
[np.nan, np.nan , np.nan]],
[[0.78985965, 0.16111745, 0.51665212],
[0.08491344, 0.30240689, 0.23544363],
[0.47282742, 0.5777977 , 0.92652398]],
[[0.78797628, 0.98634545, 0.17903971],
[0.76787071, 0.29689657, 0.08112121],
[0.08786254, 0.06319838, 0.27050039]]])
What would be the best way to do this with numpy?
x = np.arange(27).reshape(3,3,3)
[[[ 0  1  2] [ 3  4  5] [ 6  7  8]]
 [[ 9 10 11] [12 13 14] [15 16 17]]
 [[18 19 20] [21 22 23] [24 25 26]]]
x[1][1][2]
14
x[1:, 0:2, 1:4]
[[[10 11] [13 14]]
 [[19 20] [22 23]]]
This is how you can slice a 3D array. Note that a stop index past the end of an axis (here 1:4 on a length-3 axis) is simply clipped.
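The slicing above doesn't handle the edge-padding part of the question. A minimal sketch of one way to do the np.nan padding (the helper name and the clip-based approach are mine, not an established recipe):
import numpy as np
def get_cubic_slice(space, cx, cy, cz):
    # 3x3x3 window centered at (cx, cy, cz), padded with nan
    # wherever the window extends past the bounds of `space`.
    out = np.full((3, 3, 3), np.nan)
    lo = np.array([cx, cy, cz]) - 1         # requested window start
    src_lo = np.clip(lo, 0, space.shape)    # clip the window to the array
    src_hi = np.clip(lo + 3, 0, space.shape)
    dst_lo = src_lo - lo                    # where the clipped window lands in out
    dst_hi = dst_lo + (src_hi - src_lo)
    out[dst_lo[0]:dst_hi[0], dst_lo[1]:dst_hi[1], dst_lo[2]:dst_hi[2]] = \
        space[src_lo[0]:src_hi[0], src_lo[1]:src_hi[1], src_lo[2]:src_hi[2]]
    return out
space = np.random.rand(20, 20, 20)
print(get_cubic_slice(space, 0, 5, 5))  # first plane comes back all nan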

What does numpy.ix_() function do and what is the output used for?

Below shows the output from the numpy.ix_() function. What is this output used for? Its structure is quite unique.
import numpy as np
>>> gfg = np.ix_([1, 2, 3, 4, 5, 6], [11, 12, 13, 14, 15, 16], [21, 22, 23, 24, 25, 26], [31, 32, 33, 34, 35, 36] )
>>> gfg
(array([[[[1]]],
[[[2]]],
[[[3]]],
[[[4]]],
[[[5]]],
[[[6]]]]),
array([[[[11]],
[[12]],
[[13]],
[[14]],
[[15]],
[[16]]]]),
array([[[[21],
[22],
[23],
[24],
[25],
[26]]]]),
array([[[[31, 32, 33, 34, 35, 36]]]]))
According to numpy doc:
Construct an open mesh from multiple sequences.
This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions.
Using ix_ one can quickly construct index arrays that will index the cross product. a[np.ix_([1,3],[2,5])] returns the array [[a[1,2] a[1,5]], [a[3,2] a[3,5]]].
numpy.ix_()'s main use is to create an open mesh so that we can use it to select specific indices from an array (specific sub-array). An easy example to understand it is:
Say you have a 2D array of shape (5,5), and you would like to select the sub-array constructed by selecting rows 1 and 3 and columns 0 and 3. You can use np.ix_ to create an index mesh that selects the sub-array, as in the example below:
a = np.arange(5*5).reshape(5,5)
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]]
sub_indices = np.ix_([1,3],[0,3])
(array([[1],
[3]]), array([[0, 3]]))
a[sub_indices]
[[ 5 8]
[15 18]]
which is basically the selected sub-array from a that is in rows array([[1],[3]]) and columns array([[0, 3]]):
col 0 col 3
| |
v v
[[ 0 1 2 3 4]
[ 5 6 7 8 9] <- row 1
[10 11 12 13 14]
[15 16 17 18 19] <- row 3
[20 21 22 23 24]]
Please note that in the output of np.ix_, the N arrays returned for the N 1-D input sequences are ordered so that the first one indexes rows, the second one columns, the third one depth, and so on. That is why in the above example, array([[1],[3]]) is for rows and array([[0, 3]]) is for columns. The same goes for the example the OP provided in the question. The reason behind this is the way numpy uses advanced indexing for multi-dimensional arrays.
It's basically used to create N arrays of indices, each one referring to a different dimension.
For example, if I have a 3D np.ndarray and I want to get only some entries of it, I can use numpy.ix_ to create 3 index arrays with shapes like (N,1,1), (1,N,1) and (1,1,N), each containing the corresponding indices for one of the 3 axes.
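A quick sketch of that 3D case:
import numpy as np
a = np.arange(27).reshape(3, 3, 3)
# Index arrays of shape (2,1,1), (1,1,1) and (1,1,2) that broadcast to
# select the cross product of planes [0, 2], row [1] and columns [0, 2].
idx = np.ix_([0, 2], [1], [0, 2])
print(a[idx].shape)  # (2, 1, 2)
print(a[idx])
# [[[ 3  5]]
#  [[21 23]]]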
Take a look at the examples on the numpy documentation page. They're self-explanatory.
This function isn't commonly used.
I think it's used in some algebra operations like the cross product and its generalisations.

Subtracting minimum of row from the row

I know that
a - a.min(axis=0)
will subtract the minimum of each column from every element in the column. I want to subtract the minimum in each row from every element in the row. I know that
a.min(axis=1)
specifies the minimum within a row, but how do I tell the subtraction to go by rows instead of columns? (How do I specify the axis of the subtraction?)
edit: For my question, a is a 2d array in NumPy.
Assuming a is a numpy array, you can use this:
new_a = a - np.min(a, axis=1)[:,None]
Try it out:
import numpy as np
a = np.arange(24).reshape((4,6))
print(a)
new_a = a - np.min(a, axis=1)[:,None]
print(new_a)
Result:
[[ 0 1 2 3 4 5]
[ 6 7 8 9 10 11]
[12 13 14 15 16 17]
[18 19 20 21 22 23]]
[[0 1 2 3 4 5]
[0 1 2 3 4 5]
[0 1 2 3 4 5]
[0 1 2 3 4 5]]
Note that np.min(a, axis=1) returns a 1d array of row-wise minimum values.
We then add an extra dimension to it using [:,None]. It then looks like this 2d array:
array([[ 0],
[ 6],
[12],
[18]])
When this 2d array participates in the subtraction, it gets broadcasted into a shape of (4,6), which looks like this:
array([[ 0, 0, 0, 0, 0, 0],
[ 6, 6, 6, 6, 6, 6],
[12, 12, 12, 12, 12, 12],
[18, 18, 18, 18, 18, 18]])
Now, element-wise subtraction happens between the two (4,6) arrays.
Specify keepdims=True to preserve a length-1 dimension in place of the dimension that min collapses, allowing broadcasting to work out naturally:
a - a.min(axis=1, keepdims=True)
This is especially convenient when the axis is determined at runtime, but it's arguably clearer than manually reintroducing the squashed dimension even when the axis is fixed.
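A quick check with the same 4x6 array as above:
import numpy as np
a = np.arange(24).reshape(4, 6)
print(a.min(axis=1, keepdims=True).shape)  # (4, 1): broadcasts against (4, 6)
print(a - a.min(axis=1, keepdims=True))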
If you want to use only pandas, you can apply a lambda to every row, subtracting that row's minimum:
new_df = df.apply(lambda row: row - row.min(), axis=1)
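A vectorised alternative without apply (a sketch with made-up data):
import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(24).reshape(4, 6))
new_df = df.sub(df.min(axis=1), axis=0)  # subtract row minima, aligned on the index
print(new_df)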
