I have a 2D numpy array that I need to mask based on a condition, so that I can apply an operation to the masked array and then revert the masked values back to the original.
For example:
import numpy as np
array = np.random.random((3,3))
condition = np.random.randint(0, 2, (3,3))
masked = np.ma.array(array, mask=condition)
masked += 2.0
But how can I change the masked values back to the original and "remove" the mask after applying a given operation to the masked array?
The reason why I need to do this is that I am generating a boolean array based on a set of conditions and I need to modify the elements of the array that satisfy the condition.
I could use boolean indexing to do this with a 1D array, but with the 2D array I need to retain its original shape, i.e. not return a 1D array containing only the values that satisfy the condition(s).
The accepted answer doesn't fully answer the question. Setting the mask to False works in practice, but many algorithms do not support masked arrays (e.g. scipy.linalg.lstsq()), and this method does not actually get rid of the mask. So you will run into an error like this:
ValueError: masked arrays are not supported
The only way to really get rid of the mask is to reassign the variable to just the data of the masked array.
import numpy as np
array = np.random.random((3,3))
condition = np.random.randint(0, 2, (3,3))
masked = np.ma.array(array, mask=condition)
masked += 2.0
masked.mask = False
hasattr(masked, 'mask')
>> True
Reassigning the variable to the data using the MaskedArray .data attribute removes it:
masked = masked.data
hasattr(masked, 'mask')
>> False
You already have it: it's called array!
This is because np.ma.array does not copy the data by default, so masked shares its memory with array. The in-place += 2.0 only touches the unmasked elements, so once your code executes, array has the unmasked elements (where condition is 0) incremented, and the masked ones remain unchanged.
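To see this, here is a quick check along the lines of the question's example (original is just a copy kept for comparison; nothing else is assumed):
import numpy as np
array = np.random.random((3,3))
original = array.copy()                        # reference copy for comparison
condition = np.random.randint(0, 2, (3,3)).astype(bool)
masked = np.ma.array(array, mask=condition)    # shares memory with array
masked += 2.0                                  # touches only unmasked elements
print(np.allclose(array[~condition], original[~condition] + 2.0))  # True
print(np.array_equal(array[condition], original[condition]))       # True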
I have an MxN array of values taken from an experiment. Some of these values are invalid and are set to 0 to indicate such. I can construct a mask of valid/invalid values using
mask = (mat1 == 0) & (mat2 == 0)
which produces an MxN array of bool. It should be noted that the masked locations do not neatly follow columns or rows of the matrix - so simply cropping the matrix is not an option.
Now, I want to take the mean along one axis of my array (e.g. end up with a 1xN array) while excluding those invalid values from the mean calculation. Intuitively I thought
np.mean(mat1[mask],axis=1)
should do it, but the mat1[mask] operation produces a 1D array which appears to just be the elements where mask is true - which doesn't help when I only want a mean across one dimension of the array.
Is there a 'python-esque' or numpy way to do this? I suppose I could use the mask to set masked elements to NaN and use np.nanmean - but that still feels kind of clunky. Is there a way to do this 'cleanly'?
I think the best way to do this would be something along the lines of:
masked = np.ma.masked_where((mat1 == 0) & (mat2 == 0), array_to_mask)
Then take the mean with
masked.mean(axis=1)
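For concreteness, a minimal self-contained sketch of this approach; the shapes here and the random mat1/mat2 contents are made-up stand-ins for the question's data:
import numpy as np
M, N = 4, 5
mat1 = np.random.randint(0, 3, (M, N)).astype(float)
mat2 = np.random.randint(0, 3, (M, N)).astype(float)
# mask entries that are 0 in both arrays, then average whatever remains
masked = np.ma.masked_where((mat1 == 0) & (mat2 == 0), mat1)
print(masked.mean(axis=1))  # masked entries are excluded from the mean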
One similarly clunky but efficient way is to multiply your array with the mask, setting the masked values to zero. Then of course you'll have to divide by the number of non-masked values manually. Hence clunkiness. But this will work with integer-valued arrays, something that can't be said about the nan case. It also seems to be the fastest for both small and larger arrays, compared with the alternatives (including the masked array solution in another answer):
import numpy as np
def nanny(mat, mask):
    mat = mat.astype(float).copy() # don't mutate the original
    mat[~mask] = np.nan # mask values
    return np.nanmean(mat, axis=0) # compute mean

def manual(mat, mask):
    # zero masked values, divide by number of nonzeros
    return (mat*mask).sum(axis=0)/mask.sum(axis=0)
# set up dummy data for testing
N,M = 400,400
mat1 = np.random.randint(0,N,(N,M))
mask = np.random.randint(0,2,(N,M)).astype(bool)
print(np.array_equal(nanny(mat1, mask), manual(mat1, mask))) # True
I have a numpy boolean vector of shape 1 x N, and a 2D array of shape 160 x N. What is a fast way of subsetting the columns of the 2D array, such that for each index of the boolean vector that is True the column is kept, and for each index that is False the column is discarded?
If you call the vector mask and the array features, I've found the following to be far too slow: np.array([f[mask] for f in features])
Is there a better way? I feel like there has to be, right?
You can try this,
new_array = array_2d[:, bool_array]
Depending on the axis you index, you can choose whether rows or columns are dropped. If you end up with a 1-D array, you can just reshape it to recover the required shape. This method will also be faster.
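Here is a quick self-contained sketch with made-up shapes. Note that if the mask really has shape 1 x N rather than being flat, it needs a .ravel() first so it can index the column axis:
import numpy as np
features = np.random.random((160, 10))
mask = np.random.randint(0, 2, 10).astype(bool)  # flat boolean vector
new_array = features[:, mask]  # keep only the columns where mask is True
print(new_array.shape)         # (160, number of True entries)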
I have a 3D array, containing 10 2D maps of the world. I created a mask of the oceans, and I am trying to create a second array, identical to my first 3D array, but where the oceans are masked for each year. I thought that this should work:
SIF_year = np.ndarray((SIF_year0.shape))
for i in range(0,SIF_year0.shape[0]):
    SIF_year[i,:,:] = np.ma.array(SIF_year0[i,:,:], mask = np.logical_not(mask_global_land))
where SIF_year0 is the initial 3D array, and SIF_year is the version that has been masked. However, SIF_year comes out looking just like SIF_year0. Interestingly, if I do:
SIF_year = np.ndarray((SIF_year0.shape))
for i in range(0,SIF_year0.shape[0]):
    SIF_test = np.ma.array(SIF_year0[i,:,:], mask = np.logical_not(mask_global_land))
then SIF_test is the masked 2D array I need. I have tried saving the masked array to SIF_test and then resaving it into SIF_year[i,:,:], but then SIF_year looks like SIF_year0 again!
There must be some obvious mistake I'm missing...
I think I have solved the problem by adding an extra step in the loop that replaces the masked values with np.nan using ma.filled (https://docs.scipy.org/doc/numpy/reference/routines.ma.html). The underlying issue is that assigning a masked array into a slice of a plain ndarray keeps only the data and silently drops the mask, which is why SIF_year looked just like SIF_year0:
SIF_year = np.ndarray((SIF_year0.shape))
for i in range(0,SIF_year0.shape[0]):
    SIF_test = np.ma.array(SIF_year0[i,:,:], mask = np.logical_not(mask_global_land))
    SIF_year[i,:,:] = np.ma.filled(SIF_test, np.nan)
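As a side note, the same fill can be done without the loop, assuming mask_global_land is a single 2D land map so that it broadcasts across the year axis; a minimal sketch:
import numpy as np
# NaN out the ocean cells of every yearly map at once;
# mask_global_land broadcasts from (lat, lon) to (years, lat, lon)
SIF_year = np.where(mask_global_land, SIF_year0, np.nan)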
I'm having issues translating MATLAB code to Python, especially when it involves matrices/arrays.
Here, I have a 2D numpy array called output and I am computing a vector of row-major indexes t_ind of the elements that are higher than a variable vmax:
t_ind = np.flatnonzero(output > vmax)
Now I'd like to use these indexes to create a matrix based on that. In MATLAB, I could do that directly:
output(t_ind) = 2*vmax - output(t_ind);
But in Python this does not work. Specifically, I get an IndexError stating that I'm out of bounds.
I tried to figure it out, but the most elegant solution I could think of involves using np.hstack() to transform the array into a vector, applying the indexes there, collecting the values in another variable, and converting back.
Could you shed some light on this?
For a 1D array, the use of np.flatnonzero is correct. Specifically, the equivalent numpy syntax would be:
output[t_ind] = 2*vmax - output[t_ind]
Also, you can achieve the same thing using Boolean indexing. MATLAB supports this too, so if you want to translate between the two, Boolean indexing (logical indexing in the MATLAB universe) is the better way to go:
output[output > vmax] = 2*vmax - output[output > vmax]
For the 2D case, you don't use np.flatnonzero. Use np.where instead:
t_ind = np.where(output > vmax)
output[t_ind] = 2*vmax - output[t_ind]
t_ind will be a tuple of numpy arrays where the first element gives you the row locations and the second element gives you the column locations of the values that satisfy the Boolean condition passed into np.where.
As a small note, Boolean indexing works the same way for any number of dimensions. np.flatnonzero, however, computes flat row-major indices of the points that satisfy the condition, and numpy will not accept those for indexing a 2D array directly, which is why you get an out-of-bounds IndexError. To use flat indices you would have to go through the array's flat iterator (output.flat) or convert them with np.unravel_index; the np.where tuple instead indexes each dimension independently, which is what specifying t_ind as the input indexes into output does.
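For completeness, a small sketch of the flat-index route, reusing the question's names on a made-up array:
import numpy as np
vmax = 0.5
output = np.random.random((3, 4))
t_ind = np.flatnonzero(output > vmax)             # flat, row-major indices
output.flat[t_ind] = 2*vmax - output.flat[t_ind]  # MATLAB-style linear indexing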
Numpy supports both boolean indexing and multi-dimensional indexing, so you don't need to jump through all those hoops. Here are two ways to get what you want:
# The setup
import numpy as np
a = np.random.random((3, 4))
vmax = 1.2
output = np.zeros(a.shape, a.dtype)
# Method one, use a boolean array to index
mask = a > .5
output[mask] = 2 * vmax - a[mask]
# Method two, use indices to index.
t_ind = np.nonzero(a > .5)
output[t_ind] = 2 * vmax - a[t_ind]
I have a 2-D array of values and need to mask certain elements of that array (with indices taken from a list of ~ 100k tuple-pairs) before drawing random samples from the remaining elements without replacement.
I need something that is both quite fast/efficient (hopefully avoiding for loops) and has a small memory footprint because in practice the master array is ~ 20000 x 20000.
For now I'd be content with something like (for illustration):
xys=[(1,2),(3,4),(6,9),(7,3)]
gxx,gyy=numpy.mgrid[0:100,0:100]
mask = numpy.where((gxx,gyy) not in set(xys)) # The bit I can't get right
# Now sample the masked array
draws=numpy.random.choice(master_array[mask].flatten(),size=40,replace=False)
Fortunately for now I don't need the x,y coordinates of the drawn fluxes - but bonus points if you know an efficient way to do this all in one step (i.e. it would be acceptable for me to identify those coordinates first and then use them to fetch the corresponding master_array values; the illustration above is a shortcut).
Thanks!
You can do it efficiently using a sparse COO matrix (master_array below is a small stand-in for your real array):
import numpy
from scipy import sparse

master_array = numpy.random.random((100, 100))  # stand-in for the real 20000 x 20000 array
xys = [(1,2),(3,4),(6,9),(7,3)]
rows, cols = zip(*xys)  # unzip the tuple pairs into row and column indices
mask = sparse.coo_matrix((numpy.ones(len(rows), dtype=bool), (rows, cols)), shape=master_array.shape, dtype=bool)
# boolean indexing already returns a flat 1D array of the unmasked values
draws = numpy.random.choice(master_array[~mask.toarray()], size=10, replace=False)
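One caveat: mask.toarray() materializes a dense boolean array the same shape as master_array, which may be costly at 20000 x 20000. A sketch of an alternative that skips the dense mask by deleting the flat positions instead (numpy.ravel_multi_index converts the (row, column) pairs to row-major indices):
flat_ind = numpy.ravel_multi_index((rows, cols), master_array.shape)
candidates = numpy.delete(master_array.ravel(), flat_ind)  # one flat copy, no dense mask
draws = numpy.random.choice(candidates, size=10, replace=False)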