I'm certain there's a good way to do this but I'm blanking on the right search terms to google, so I'll ask here instead. My problem is this:
I have two 2-dimensional arrays, both with the same dimensions. One array (array 1) is the accumulated precipitation at (x,y) points. The other (array 2) is the topographic height of the same (x,y) grid. I want to sum up array 1 between specific heights of array 2, and create a bar graph with topographic height bins as the x-axis and total accumulated precipitation on the y-axis.
So I want to be able to declare a list of heights (say [0, 100, 200, ..., 1000]) and for each bin, sum up all precipitation that occurred within that bin.
I can think of a few complicated ways to do this, but I'm guessing there's probably an easier way that I'm not thinking of. My gut instinct is to loop through my list of heights, mask anything outside of that range, sum up remaining values, add those to a new array, and repeat.
What I'm wondering is whether there's a built-in numpy function (or one from a similar library) that can do this more efficiently.
This code shows what you're asking for, with some explanation in the comments:
import numpy as np
def in_range(x, lower_bound, upper_bound):
    # returns whether x is between lower_bound (inclusive) and upper_bound (exclusive);
    # a plain comparison also works for float heights, unlike `x in range(...)`
    return lower_bound <= x < upper_bound
# vectorize allows you to easily 'map' the function to a numpy array
vin_range = np.vectorize(in_range)
# representing your rainfall
rainfall = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# representing your height map
height = np.array([[1, 2, 1], [2, 4, 2], [3, 6, 3]])
# the bands of height you're looking to sum
bands = [[0, 2], [2, 4], [4, 6], [6, 8]]
# computing the actual results you'd want to chart
result = [(band, sum(rainfall[vin_range(height, *band)])) for band in bands]
print(result)
The next to last line is where the magic happens. vin_range(height, *band) uses the vectorized function to create a numpy array of boolean values, with the same dimensions as height, that is True where the corresponding value of height is in the given range and False otherwise.
By using that array to index the array with the target values (rainfall), you get an array that only has the values for which the height is in the target range. Then it's just a matter of summing those.
In more steps than result = [(band, sum(rainfall[vin_range(height, *band)])) for band in bands] (but with the same result):
result = []
for lower, upper in bands:
    include = vin_range(height, lower, upper)
    values_to_include = rainfall[include]
    sum_of_rainfall = sum(values_to_include)
    result.append(([lower, upper], sum_of_rainfall))
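Since the end goal is a bar graph, here is a minimal plotting sketch (assuming matplotlib; the axis labels are just placeholders) that charts result from above:
import matplotlib.pyplot as plt
# label each bar with its height band, e.g. "0-2"
labels = [f"{lo}-{hi}" for (lo, hi), _ in result]
totals = [total for _, total in result]
plt.bar(labels, totals)
plt.xlabel("height band")
plt.ylabel("total precipitation")
plt.show()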
You can use np.bincount together with np.digitize. digitize creates an array of bin indices from the height array height and the bin boundaries bins. bincount then uses the bin indices to sum the data in array rain.
# set up
rain = np.random.randint(0, 100, (5, 5)) / 10
height = np.random.randint(0, 10000, (5, 5)) / 10
bins = [0, 250, 500, 750, 10000]
# compute
sums = np.bincount(np.digitize(height.ravel(), bins), rain.ravel(), minlength=len(bins) + 1)
# result
sums
# array([ 0. , 37. , 35.6, 14.6, 22.4, 0. ])
# check against direct method
[rain[(height >= bins[i]) & (height < bins[i+1])].sum() for i in range(len(bins) - 1)]
# [37.0, 35.6, 14.600000000000001, 22.4]
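For reference, a small sketch of digitize's indexing convention with the edges above (index 0 is reserved for values below bins[0], which is why the first and last entries of sums can be zero):
np.digitize([0, 100, 250, 600, 9999], [0, 250, 500, 750, 10000])
# array([1, 1, 2, 3, 4]) -- a value below 0 would get index 0, one >= 10000 would get index 5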
An example using the numpy ma module, which allows you to make masked arrays. From the docs:
A masked array is the combination of a standard numpy.ndarray and a mask. A mask is either nomask, indicating that no value of the associated array is invalid, or an array of booleans that determines for each element of the associated array whether the value is valid or not.
which seems to be what you need in this case.
import numpy as np
pr = np.random.randint(0, 1000, size=(100, 100))  # precipitation map
he = np.random.randint(0, 1000, size=(100, 100))  # height map
bins = np.arange(0, 1001, 200)
values = []
for vmin, vmax in zip(bins[:-1], bins[1:]):
    # creating the masked array; here the minimum is included in the bin, the maximum excluded
    maskedpr = np.ma.masked_where((he < vmin) | (he >= vmax), pr)
    values.append(maskedpr.sum())
values is the list of values for each bin, which you can plot.
The numpy.ma.masked_where function returns an array masked where condition is True. So you need to set the condition to be True outside the bins.
The sum() method performs the sum only where the array is not masked.
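As a tiny illustration of that last point (a sketch with made-up values):
m = np.ma.masked_where(np.array([True, False, True]), np.array([1, 2, 3]))
print(m.sum())  # 2 -- only the unmasked element contributes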
Related
Say I have an array:
x = np.array([[0, 1, 2],
              [3, 4, 5],
              [6, 7, 8]])
And a multi-labeled mask:
labels = np.array([[0, 0, 2],
                   [1, 1, 2],
                   [1, 1, 2]])
My goal is to sum the entries of x together, grouped by labels. For example:
n_labels = np.max(labels) + 1
out = np.empty(n_labels)
for label in range(n_labels):
    mask = labels == label
    out[label] = np.sum(x[mask])
>>> out
array([ 1., 20., 15.])
However, as x and n_labels become large, I see this being inefficient. On each iteration, you only sum together a small fraction of the entries of x, but you still recompute the mask over all of labels (in the expression labels == label) and subsequently index over all of x (in the expression x[mask]). Is there a more efficient way to do this as x and n_labels grow large?
You can use bincount with weights:
np.bincount(labels.ravel(), weights=x.ravel())
out:
array([ 1., 20., 15.])
You don't really have a reason to operate on 2D arrays, so ravel them first:
labels = labels.ravel()
x = x.ravel()
If your labels are already indices, you can use np.argsort along with np.diff and np.add.reduceat:
index = labels.argsort()
splits = np.r_[0, np.flatnonzero(np.diff(labels[index])) + 1]
result = np.add.reduceat(x[index], splits)
labels[index] is the sorted label array. Whenever that changes, you enter a new group, and the diff is nonzero. That's what np.flatnonzero(np.diff(labels[index])) finds for you. Since each new run starts one position past the nonzero diff, you need to add one. np.r_ allows you to prepend zero easily to a 1D array, which is necessary for reduceat to regard the first run (the last is always automatic).
Before you run reduceat, you need to order x into the runs defined by labels, which is what x[index] does.
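Putting the pieces together on the raveled arrays from the question gives:
labels = np.array([0, 0, 2, 1, 1, 2, 1, 1, 2])  # labels.ravel()
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])       # x.ravel()
index = labels.argsort()                        # [0, 1, 3, 4, 6, 7, 2, 5, 8]
splits = np.r_[0, np.flatnonzero(np.diff(labels[index])) + 1]  # [0, 2, 6]
np.add.reduceat(x[index], splits)               # array([ 1, 20, 15])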
If you want to keep the 2D arrays as they are, another (slow and over-engineered) approach uses np.add.at:
sums = np.zeros(labels.max()+1, x.dtype)
np.add.at(sums, labels, x)
sums
Output
array([ 1, 20, 15])
I have a list of numbers a and a list of bins which I shall use to bin the numbers in a using numpy.histogram. The bins are calculated from the mean and standard deviation (std) of a: the number of bins is B, the minimum value of the first bin is mean - std, and the maximum of the last bin is mean + std.
An example goes like the following:
>>> a
array([1, 1, 3, 2, 2, 6])
>>> bins = np.linspace(mean - std, mean + std, B + 1)
>>> bins
array([ 0.79217487,  1.93072496,  3.06927504,  4.20782513])
>>> numpy.histogram(a, bins = bins)[0]
array([2, 3, 0], dtype=int32)
However, I want to exclude the rightmost edge of the last bin - i.e. if some value in a exactly equals mean + std, I do not wish to include it in the last bin. The detail about mean and std is not important; excluding the rightmost edge (i.e. making it a half-open interval) is. Unfortunately, the doc says:
All but the last (righthand-most) bin is half-open. In other words, if
bins is:
[1, 2, 3, 4] then the first bin is [1, 2) (including 1, but excluding
2) and the second [2, 3). The last bin, however, is [3, 4], which
includes 4.
Is there a simple solution I can employ? That is, one that does not involve manually fixing edges. That is something I can do, but that's not what I'm looking for. Is there a flag I can pass or a different method I can use?
Here's one (kind of crude?) way to make the last bin half-open instead of closed: step the right edge of the right-most bin down to the next smaller representable float:
a = np.array([1, 1, 3, 2, 2, 6])
B = 3 # (in this example)
bins = np.linspace(a.mean() - a.std(), a.mean() + a.std(), B + 1)
# array([ 0.79217487,  1.93072496,  3.06927504,  4.20782513])
bins[-1] = np.nextafter(bins[-1], -np.inf)  # <== this is the crucial line
np.histogram(a, bins=bins)
Note that simply subtracting np.finfo(float).eps does not work in general: eps is the float spacing near 1.0, so for an edge around 4.2 it is less than half a ULP and the subtraction rounds back to the original value. np.nextafter steps down by exactly one representable value and respects the dtype, so the same line works if a holds some wider type such as np.float128.
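A quick demonstration of the edge case being fixed (a sketch with a value landing exactly on the right edge):
a = np.array([1.0, 2.0, 3.0, 4.0])
bins = np.array([1.0, 2.0, 3.0, 4.0])       # 4.0 sits exactly on the closed right edge
print(np.histogram(a, bins=bins)[0])        # [1 1 2] -- 4.0 is counted in the last bin
bins[-1] = np.nextafter(bins[-1], -np.inf)  # make the last bin half-open
print(np.histogram(a, bins=bins)[0])        # [1 1 1] -- 4.0 is now excluded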
Filter the array first. Do NOT use the numpy.clip() function: it would just set out-of-range data to the clip high/low values, which then get counted in the leftmost and rightmost bins, creating high peaks at both ends.
The following code worked for me. My case is an integer array; I expect it should be OK with a float array too.
clip_low = a.mean() - a.std()
clip_high = a.mean() + a.std()
# filter with a boolean mask rather than np.clip(); a < clip_high excludes the right edge
clip = a[(clip_low <= a) & (a < clip_high)]
bins = int(clip_high - clip_low)  # one bin per integer here; substitute your own bin count
hist, bins_edge = np.histogram(clip, bins=bins, range=(clip_low, clip_high))
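To see why np.clip() is the wrong tool here, a minimal sketch with made-up values; the out-of-range points pile up in the two edge bins instead of being dropped:
a = np.array([-5, 1, 2, 3, 9])  # two out-of-range values
clipped = np.clip(a, 0, 4)      # -5 -> 0 and 9 -> 4
print(np.histogram(clipped, bins=4, range=(0, 4))[0])                # [1 1 1 2]
print(np.histogram(a[(0 <= a) & (a < 4)], bins=4, range=(0, 4))[0])  # [0 1 1 1]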
I want to compute a histogram of the differences between all the elements in one array A with all the elements in another array B.
So I want to have a histogram of the following data:
Delta1 = A1-B1
Delta2 = A1-B2
Delta3 = A1-B3
...
DeltaN = A2-B1
DeltaN+1 = A2-B2
DeltaN+2 = A2-B3
...
The point of this calculation is to show that these data have a correlation, even though not every data point has a "partner" in the other array and the correlation is rather noisy in practice.
The problem is that in practice these files are very large, several GB, and all entries of the vectors are 64-bit integers with very large differences.
It seems unfeasible to me to convert these data to binary arrays in order to be able to use correlation functions and Fourier transforms to compute this.
Here is a small example to give a better taste of what I'm looking at.
This implementation with numpy's searchsorted in a for loop is rather slow.
import numpy as np
import matplotlib.pyplot as plt
timetagsA = [668656283, 974986989, 1294941174, 1364697327, 1478796061,
             1525549542, 1715828978, 2080480431, 2175456303, 2921498771,
             3671218524, 4186901001, 4444689281, 5087334517, 5467644990,
             5836391057, 6249837363, 6368090967, 8344821453, 8933832044,
             9731229532]
timetagsB = [13455, 1294941188, 1715828990, 2921498781, 5087334530,
             5087334733, 6368090978, 9731229545, 9731229800, 9731249954]
max_delta_t = 500
nbins = 10000
histo = np.zeros((nbins, 2), dtype=float)
histo[:, 0] = np.arange(nbins)

for i in range(len(timetagsA)):
    delta_t = 0
    j = np.searchsorted(timetagsB, timetagsA[i])
    while delta_t < max_delta_t and j < len(timetagsB):
        delta_t = timetagsB[j] - timetagsA[i]
        if delta_t < max_delta_t:
            histo[int(delta_t), 1] += 1
        j = j + 1

plt.plot(histo[0:50, 1])
plt.show()
It would be great if someone could help me to find a faster way to compute this. Thanks in advance!
EDIT
The solution below supposes that your data is so huge that you cannot use np.subtract.outer and then slice out the values you want to keep:
arr_diff = np.subtract.outer(arrB, arrA)
print(arr_diff[(0 < arr_diff) & (arr_diff < max_delta_t)])
# array([ 14, 12, 10, 13, 216, 11, 13, 268], dtype=int64)
It works with your example data, but not with a huge data set.
ORIGINAL SOLUTION
Let's first suppose that max_delta_t is smaller than the difference between any two successive values in timetagsB, for an easy way of doing it (then we can try to generalize).
# create arrays instead of lists
arrA = np.array(timetagsA)
arrB = np.array(timetagsB)
max_delta_t = np.diff(arrB).min() - 1  # here it's 202, just for the explanation
You can use np.searchsorted in a vectorized way:
# create the array of search
arr_search = np.searchsorted(arrB, arrA) # the position of each element of arrA in arrB
print (arr_search)
# array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 6, 6, 6, 6, 7, 7, 7],dtype=int64)
You can calculate the difference between the element of arrB corresponding to each element of arrA by indexing arrB with arr_search:
# calculate the difference
arr_diff = arrB[arr_search] - arrA
print(arr_diff[arr_diff < max_delta_t])  # find the ones smaller than max_delta_t
# array([14, 12, 10, 13, 11, 13], dtype=int64)
What you are looking for can then be calculated with np.bincount:
arr_bins = np.bincount(arr_diff[arr_diff < max_delta_t])
# to make it look like histo, though not strictly necessary
histo = np.array([range(len(arr_bins)), arr_bins]).T
Now the problem is that some difference values between arrA and arrB cannot be obtained with this method when max_delta_t is bigger than the gap between two successive values in arrB. Here is one way to handle any value of max_delta_t; it is maybe not the most efficient, depending on the values of your data.
# need an array with the number of elements of arrB within a max_delta_t
# range of each element of arrA
arr_diff_search = np.searchsorted(arrB, arrA + max_delta_t) - np.searchsorted(arrB, arrA)

# do a loop to calculate all the values you are interested in
list_arr = []
for i in range(arr_diff_search.max() + 1):
    arr_diff = arrB[(arr_search + i) % len(arrB)][arr_diff_search >= i] - arrA[arr_diff_search >= i]
    list_arr.append(arr_diff[(0 < arr_diff) & (arr_diff < max_delta_t)])
Now you can np.concatenate list_arr and use np.bincount, such as:
arr_bins = np.bincount(np.concatenate(list_arr))
histo = np.array([range(len(arr_bins)),arr_bins]).T
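As a sanity check (a sketch using the small example arrays above), the generalized version can be compared against the brute-force outer difference:
brute = np.subtract.outer(arrB, arrA)
brute = np.sort(brute[(0 < brute) & (brute < max_delta_t)])
print(np.array_equal(brute, np.sort(np.concatenate(list_arr))))  # expect True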
I want to summarize a 3d array dat using indices contained in a 2d array idx.
Consider the example below. For each margin along dat[:, :, i], I want to compute the median according to some index idx. The desired output (out) is a 2d array, whose rows record the index and columns record the margin. The following code works but is not very efficient. Any suggestions?
import numpy as np
dat = np.arange(12).reshape(2, 2, 3)
idx = np.array([[0, 0], [1, 2]])
out = np.empty((3, 3))
for i in np.unique(idx):
    out[i] = np.median(dat[idx == i], axis=0)
print(out)
Output:
[[ 1.5  2.5  3.5]
 [ 6.   7.   8. ]
 [ 9.  10.  11. ]]
To visualize the problem better, I will refer to the 2x2 dimensions of the array as the rows and columns, and the 3 dimension as depth. I will refer to vectors along the 3rd dimension as "pixels" (pixels have length 3), and planes along the first two dimensions as "channels".
Your loop is accumulating a set of pixels selected by the mask idx == i, and taking the median of each channel within that set. The result is an Nx3 array, where N is the number of distinct indices that you have.
One day, generalized ufuncs will be ubiquitous in numpy, and np.median will be such a function. On that day, you will be able to use reduceat magic [1] to do something like
unq, ind = np.unique(idx, return_inverse=True)
np.median.reduceat(dat.reshape(-1, dat.shape[-1]), np.r_[0, np.where(np.diff(unq[ind]))[0]+1])
[1] See Applying operation to unevenly split portions of numpy array for more info on the specific type of magic.
Since this is not currently possible, you can use scipy.ndimage.median instead. This version allows you to compute medians over a set of labeled areas in an array, which is exactly what you have with idx. This method assumes that your index array contains N densely packed values, all of which are in range(N). Otherwise the reshaping operations will not work properly.
If that is not the case, start by transforming idx:
_, ind = np.unique(idx, return_inverse=True)
idx = ind.reshape(idx.shape)
OR
idx = np.unique(idx, return_inverse=True)[1].reshape(idx.shape)
Since you are actually computing a separate median for each region and channel, you will need to have a set of labels for each channel. Flesh out idx to have a distinct set of indices for each channel:
chan = dat.shape[-1]
offset = idx.max() + 1
index = np.stack([idx + i * offset for i in range(chan)], axis=-1)
Now index has an identical set of regions defined in each channel, which you can use in scipy.ndimage.median:
out = scipy.ndimage.median(dat, index, index=range(offset * chan)).reshape(chan, offset).T
The input labels must be densely packed from zero to offset * chan for index=range(offset * chan) to work properly, and the reshape operation to have the right number of elements. The final transpose is just an artifact of how the labels are arranged.
Here is the complete product, along with an IDEOne demo of the result:
import numpy as np
from scipy.ndimage import median
dat = np.arange(12).reshape(2, 2, 3)
idx = np.array([[0, 0], [1, 2]])
def summarize(dat, idx):
    idx = np.unique(idx, return_inverse=True)[1].reshape(idx.shape)
    chan = dat.shape[-1]
    offset = idx.max() + 1
    index = np.stack([idx + i * offset for i in range(chan)], axis=-1)
    return median(dat, index, index=range(offset * chan)).reshape(chan, offset).T
print(summarize(dat, idx))
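For the example arrays above, this prints the same result as the loop in the question:
[[ 1.5  2.5  3.5]
 [ 6.   7.   8. ]
 [ 9.  10.  11. ]]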
I have a lot of 750x750 images. I want to take the geometric mean of non-overlapping 5x5 patches from each image, and then for each image, average those geometric means to create one feature per image. I wrote the code below, and it seems to work just fine. But, I know it's not very efficient. Running it on 300 or so images takes around 60 seconds. I have about 3000 images. So, while it works for my purpose, it's not efficient. How can I improve this code?
import numpy as np
from scipy import misc
from scipy.stats import gmean

# each sublist of gmeans will contain a list of 22500 geometric means
# corresponding to the non-overlapping 5x5 patches for a given image.
gmeans = [[] for _ in range(12)]  # one sublist per subfolder
#the loop here populates gmeans.
for folder in range(len(subfolders)):
    just_thefilename, colorsourceimages, graycroppedfiles = get_all_images(folder)
    for items in graycroppedfiles:
        myarray = misc.imread(items)
        area_of_big_matrix = 750 * 750
        area_of_small_matrix = 5 * 5
        how_many = area_of_big_matrix // area_of_small_matrix
        n = 0
        p = 0
        mylist = []
        while len(mylist) < how_many:
            mylist.append(gmean(myarray[n:n+5, p:p+5], None))
            n = n + 5
            if n == 750:
                p = p + 5
                n = 0
        gmeans[folder].append(mylist)
# each sublist of mean_of_gmeans will contain just one feature per image:
# the mean of the geometric means of the 5x5 patches.
mean_of_gmeans = [[] for _ in range(12)]
for folder in range(len(subfolders)):
    for items in range(len(gmeans[0])):
        mean_of_gmeans[folder].append(np.mean(gmeans[folder][items], dtype=np.float64))
I can understand the suggestion to move this to the code review site,
but this problem provides a nice example of the power of using vectorized
numpy and scipy functions, so I'll give an answer.
The function below, cleverly called func, computes the desired value.
The key is to reshape the image into a four-dimensional array. Then
it can be interpreted as a two-dimensional array of two-dimensional
arrays, where the inner arrays are the 5x5 blocks.
scipy.stats.gmean can compute the geometric mean over more than one
dimension, so that is used to reduce the four-dimensional array to the
desired two-dimensional array of geometric means. The return value is the
(arithmetic) mean of those geometric means.
import numpy as np
from scipy.stats import gmean
def func(img, blocksize=5):
    # img must be a 2-d array whose dimensions are divisible by blocksize.
    if (img.shape[0] % blocksize) != 0 or (img.shape[1] % blocksize) != 0:
        raise ValueError("blocksize does not divide the shape of img.")
    # Reshape 'img' into a 4-d array 'blocks', so blocks[i, :, j, :] is
    # the subarray with shape (blocksize, blocksize).
    blocks_nrows = img.shape[0] // blocksize
    blocks_ncols = img.shape[1] // blocksize
    blocks = img.reshape(blocks_nrows, blocksize, blocks_ncols, blocksize)
    # Compute the geometric mean over axes 1 and 3 of 'blocks'. This results
    # in the array of geometric means with shape (blocks_nrows, blocks_ncols).
    gmeans = gmean(blocks, axis=(1, 3), dtype=np.float64)
    # The return value is the average of 'gmeans'.
    avg = gmeans.mean()
    return avg
For example, here the function is applied to an array with shape (750, 750).
In [358]: np.random.seed(123)
In [359]: img = np.random.randint(1, 256, size=(750, 750)).astype(np.uint8)
In [360]: func(img)
Out[360]: 97.035648309350179
It isn't easy to verify that that is the correct result, so here is a much smaller example:
In [365]: np.random.seed(123)
In [366]: img = np.random.randint(1, 4, size=(3, 6))
In [367]: img
Out[367]:
array([[3, 2, 3, 3, 1, 3],
       [3, 2, 3, 2, 3, 2],
       [1, 2, 3, 2, 1, 3]])
In [368]: func(img, blocksize=3)
Out[368]: 2.1863131342986666
Here is the direct calculation:
In [369]: 0.5*(gmean(img[:,:3], axis=None) + gmean(img[:, 3:], axis=None))
Out[369]: 2.1863131342986666