I'm working with Python. I want to know if there is a Pythonic way to mask a 3D XYZ array (a volumetric image) in order to perform a segmentation analysis such as skeletonization.
I'm handling a 600x600x300 array, so reducing the number of candidates to be analyzed is key to performance. I tried arr[mask], but boolean indexing collapses the array to 1-D. The where method, as in "How to Correctly mask 3D Array with numpy", performs the change one value at a time, but skeletonization needs to analyze each voxel's neighbors, so the 3-D structure must be kept.
Edit: Here is something simple that might help you get the idea. It creates a 3D AOI inside a volume.
import numpy as np
from skimage import morphology

# create an array with random numbers
Array = np.random.random([10, 10, 10])
# create a boolean mask of zeros
maskArr = np.zeros_like(Array, dtype=bool)
# set a few values in the mask to True
maskArr[1:8, 1:5, 1:3] = True
# try to analyze the data with the mask
process = morphology.skeletonize(Array[maskArr])
This is the error caused by the 1-D array:
ValueError: skeletonize requires a 2D or 3D image as input, got 1D.
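A minimal sketch of one common workaround, assuming it is acceptable to treat voxels outside the AOI as background rather than remove them: keep the volume 3-D and combine the AOI mask with a foreground mask, so the neighborhood structure skeletonization needs is preserved (the 0.5 threshold is an arbitrary assumption for illustration):
import numpy as np
from skimage import morphology

Array = np.random.random([10, 10, 10])
maskArr = np.zeros_like(Array, dtype=bool)
maskArr[1:8, 1:5, 1:3] = True

# threshold to a binary foreground, then restrict it to the AOI;
# the result is still a 3-D array, so neighbors are preserved
binary = (Array > 0.5) & maskArr
skeleton = morphology.skeletonize(binary)
print(skeleton.shape)  # (10, 10, 10)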
Related
I have a 3D image, a numpy array of shape (1314, 489, 3), showing a corn cob on a black background.
Now I want to calculate the mean RGB color value of the mask (the cob without the black background). Calculating the RGB value for the whole image is easy:
print(np.mean(colormaskcutted, axis=(0, 1)))
>>[186.18434633 88.89164511 46.32022921]
But now I want this mean RGB color value only for the cob. I have a 2D boolean mask
of shape (1314, 489), where each value corresponds to all 3 color channel values of a pixel.
I tried slicing the image array with the mask, as follows:
print(np.mean(colormaskcutted[boolean], axis=(0, 1)))
>>124.57794089613752
But this returned only one value instead of 3 values for the RGB color.
How can I filter the 3D numpy image with the 2D boolean mask so that the mean RGB color calculation can be performed?
If your question is limited to computing the mean, you don't necessarily need to subset the image. You can simply do, e.g.
np.sum(colormaskcutted*boolean[:,:,None], axis = (0,1))/np.sum(boolean)
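Here boolean[:,:,None] adds a trailing channel axis so the 2D mask broadcasts against the (1314, 489, 3) image, and dividing by np.sum(boolean) (the number of selected pixels) turns the per-channel sums into means.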
P.S. I've played around with indexing; you can amend your original approach as follows:
np.mean(colormaskcutted[boolean,:], axis = 0)
P.P.S. Can't resist some benchmarking. The summation approach takes 15.9s (1000 iterations, dimensions as in the example, old computer); the advanced indexing approach is slightly slower, at 17.7s. However, the summation can be optimized further. Using count_nonzero, as per Mad Physicist's suggestion, marginally improves the time to 15.3s. We can also use tensordot to skip creating a temporary array:
np.tensordot(colormaskcutted, boolean, axes=[[0, 1], [0, 1]])/np.count_nonzero(boolean)
This cuts the time to 4.5s.
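For anyone who wants to verify that the three approaches agree, a small self-contained sketch on dummy data (shapes mirror the example; names are placeholders):
import numpy as np

img = np.random.randint(0, 256, (1314, 489, 3)).astype(float)
boolean = np.random.rand(1314, 489) > 0.5

summation = np.sum(img * boolean[:, :, None], axis=(0, 1)) / np.count_nonzero(boolean)
indexing = np.mean(img[boolean, :], axis=0)
tensor = np.tensordot(img, boolean, axes=[[0, 1], [0, 1]]) / np.count_nonzero(boolean)

print(np.allclose(summation, indexing), np.allclose(indexing, tensor))  # True True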
I have a 3D numpy array points of dimensions [10000x3000x128], where the first dimension is the number of frames, the second is the number of points in each frame, and the third is a 128-element feature vector associated with each point. What I want to do is efficiently filter the points in each frame using a boolean 2D mask of dimensions [10000x3000] and, for each selected point, also take the related 128-dim feature vector. Moreover, the output still needs to be a 3D array, not a merged 2D array, and I would like to avoid any for loop.
Actually what I'm doing is:
# example of points (an array of shape [10000, 3000, 128])
points = np.random.random((10000, 3000, 128))
# fg, bg = 2D boolean np.array of shape [10000, 3000]
# init empty lists
fg_points, bg_points = [], []
for i in range(points.shape[0]):
    fg_mask_tmp, bg_mask_tmp = fg[i], bg[i]
    fg_points.append(points[i, fg_mask_tmp, :])
    bg_points.append(points[i, bg_mask_tmp, :])
fg_features, bg_features = np.array(fg_points), np.array(bg_points)
But this is quite a naive solution that can surely be improved in a more numpy-like way.
In addition, I also tried other solutions, such as:
fg_features = points[fg, :]
But this solution does not preserve the dimensions of the array: it merges the first two dimensions, since the number of filtered points can vary from frame to frame.
Another solution I tried was to enlarge the 2D masks by repeating them [128] times along a new last dimension, but without any success.
Does anyone know a possible efficient solution?
Thank you in advance for any help!
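A sketch of one vectorized alternative, under the assumption that a zero-filled feature vector for each filtered-out point is acceptable (since the number of selected points varies per frame, a rectangular 3D output needs some such convention):
import numpy as np

# small dummy shapes so the sketch runs quickly; scale up as needed
frames, pts, feat = 100, 30, 128
points = np.random.random((frames, pts, feat))
fg = np.random.rand(frames, pts) > 0.5

# broadcast the 2D mask over the feature axis: filtered-out points become
# all-zero feature vectors, the 3D shape is preserved, and no Python loop is used
fg_features = points * fg[:, :, None]
print(fg_features.shape)  # (100, 30, 128)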
I have a MxN array of values taken from an experiment. Some of these values are invalid and are set to 0 to indicate such. I can construct a mask of valid/invalid values using
mask = (mat1 == 0) & (mat2 == 0)
which produces an MxN array of bool. It should be noted that the masked locations do not neatly follow columns or rows of the matrix - so simply cropping the matrix is not an option.
Now, I want to take the mean along one axis of my array (e.g. end up with a 1xN array) while excluding the invalid values from the mean calculation. Intuitively I thought
np.mean(mat1[mask], axis=1)
should do it, but the mat1[mask] operation produces a 1D array of just the elements where mask is True - which doesn't help when I only want a mean across one dimension of the array.
Is there a 'python-esque' or numpy way to do this? I suppose I could use the mask to set masked elements to NaN and use np.nanmean - but that still feels kind of clunky. Is there a way to do this 'cleanly'?
I think the best way to do this would be something along the lines of:
masked = np.ma.masked_where((mat1 == 0) & (mat2 == 0), array_to_mask)
Then take the mean with
masked.mean(axis=1)
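A self-contained sketch of this masked-array approach on dummy data (the shapes and the single-matrix condition are illustrative assumptions):
import numpy as np

M, N = 5, 4
mat1 = np.random.randint(1, 10, (M, N)).astype(float)
mat1[1, 2] = 0  # pretend this reading is invalid

# mask the invalid entries, then average over the valid values only
masked = np.ma.masked_where(mat1 == 0, mat1)
print(masked.mean(axis=1))  # per-row means, invalid entries excluded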
One similarly clunky but efficient way is to multiply your array with the mask, setting the masked values to zero. Then of course you'll have to divide by the number of non-masked values manually, hence the clunkiness. But this will work with integer-valued arrays, something that can't be said about the NaN case. It also seems to be the fastest approach for both small and larger arrays (faster than the masked-array solution in the other answer, too):
import numpy as np

def nanny(mat, mask):
    mat = mat.astype(float).copy()  # don't mutate the original
    mat[~mask] = np.nan             # mask values
    return np.nanmean(mat, axis=0)  # compute mean

def manual(mat, mask):
    # zero masked values, divide by number of nonzeros
    return (mat * mask).sum(axis=0) / mask.sum(axis=0)

# set up dummy data for testing
N, M = 400, 400
mat1 = np.random.randint(0, N, (N, M))
mask = np.random.randint(0, 2, (N, M)).astype(bool)

print(np.array_equal(nanny(mat1, mask), manual(mat1, mask)))  # True
I have a 3D array, containing 10 2D maps of the world. I created a mask of the oceans, and I am trying to create a second array, identical to my first 3D array, but where the oceans are masked for each year. I thought that this should work:
SIF_year = np.ndarray(SIF_year0.shape)
for i in range(0, SIF_year0.shape[0]):
    SIF_year[i, :, :] = np.ma.array(SIF_year0[i, :, :], mask=np.logical_not(mask_global_land))
where SIF_year0 is the initial 3D array, and SIF_year is the version that has been masked. However, SIF_year comes out looking just like SIF_year0. Interestingly, if I do:
SIF_year = np.ndarray(SIF_year0.shape)
for i in range(0, SIF_year0.shape[0]):
    SIF_test = np.ma.array(SIF_year0[i, :, :], mask=np.logical_not(mask_global_land))
then SIF_test is the masked 2D array I need. I have tried saving the masked array to SIF_test and then resaving it into SIF_year[i,:,:], but then SIF_year looks like SIF_year0 again!
There must be some obvious mistake I'm missing...
I think I have solved the problem by adding an extra step in the loop that replaces the masked values with np.nan using ma.filled (https://docs.scipy.org/doc/numpy/reference/routines.ma.html):
SIF_year = np.ndarray(SIF_year0.shape)
for i in range(0, SIF_year0.shape[0]):
    SIF_test = np.ma.array(SIF_year0[i, :, :], mask=np.logical_not(mask_global_land))
    SIF_year[i, :, :] = np.ma.filled(SIF_test, np.nan)
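The reason the original loop appeared to do nothing is that assigning a masked array into a plain ndarray slot stores only the data and silently drops the mask. A loop-free sketch of the same fix, assuming mask_global_land is a 2D land mask that should broadcast across the year axis:
import numpy as np

# dummy stand-ins for the question's variables
SIF_year0 = np.random.random((10, 180, 360))        # 10 yearly world maps
mask_global_land = np.random.rand(180, 360) > 0.7   # True over land (assumption)

# broadcast the 2D land mask over the year axis; ocean cells become NaN in one shot
SIF_year = np.where(mask_global_land[None, :, :], SIF_year0, np.nan)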
I have a 2-D array of values and need to mask certain elements of that array (with indices taken from a list of ~ 100k tuple-pairs) before drawing random samples from the remaining elements without replacement.
I need something that is both quite fast/efficient (hopefully avoiding for loops) and has a small memory footprint because in practice the master array is ~ 20000 x 20000.
For now I'd be content with something like (for illustration):
import numpy
xys = [(1, 2), (3, 4), (6, 9), (7, 3)]
master_array = numpy.random.random((100, 100))  # illustrative stand-in
gxx, gyy = numpy.mgrid[0:100, 0:100]
mask = numpy.where((gxx, gyy) not in set(xys))  # The bit I can't get right
# Now sample the masked array
draws = numpy.random.choice(master_array[mask].flatten(), size=40, replace=False)
Fortunately for now I don't need the x,y coordinates of the drawn fluxes - but bonus points if you know an efficient way to do this all in one step (i.e. it would be acceptable for me to identify those coordinates first and then use them to fetch the corresponding master_array values; the illustration above is a shortcut).
Thanks!
You can do it efficiently using a sparse COO matrix:
from scipy import sparse
import numpy

xys = [(1, 2), (3, 4), (6, 9), (7, 3)]
rows, cols = zip(*xys)
mask = sparse.coo_matrix((numpy.ones(len(xys), dtype=bool), (rows, cols)),
                         shape=master_array.shape, dtype=bool)
draws = numpy.random.choice(master_array[~mask.toarray()].flatten(), size=10)
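And for the bonus part (recovering the x, y coordinates of the draws), a sketch using flat indices instead of the sparse matrix; numpy.ravel_multi_index and numpy.unravel_index do the coordinate bookkeeping (master_array, rows and cols as above):
# build a flat validity mask: everything is valid except the listed (x, y) pairs
flat_mask = numpy.ones(master_array.size, dtype=bool)
flat_mask[numpy.ravel_multi_index((rows, cols), master_array.shape)] = False

# draw flat indices from the valid positions, without replacement
picks = numpy.random.choice(numpy.flatnonzero(flat_mask), size=40, replace=False)
draws = master_array.ravel()[picks]
xy = numpy.unravel_index(picks, master_array.shape)  # coordinates of each draw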