I'm working on comparing values in a numpy matrix.
Initially I wanted to check if any of the values in the matrix m were smaller than X, so I used:
(m < X).any()
Which worked fine, but now I would like it to ignore all 0 values in the matrix, so in essence to tell me if any values in the matrix m are in the range 0 < m < X.
I've figured out a way to do this with a while loop, but I was hoping there might be a similar function to the above that could do the trick?
Many Thanks
You can combine the two conditions with logical_and:
np.where(np.logical_and(0 < a, a < 6))
And it will give you two arrays, which tell you the locations in your matrix.
(array([0, 0, 1, 1, 1], dtype=int32),
array([1, 2, 0, 1, 2], dtype=int32))
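For reference, an a that reproduces those outputs (a minimal reconstruction; the original answer does not define a):
import numpy as np

a = np.arange(9).reshape(3, 3)   # [[0 1 2], [3 4 5], [6 7 8]]
rows, cols = np.where(np.logical_and(0 < a, a < 6))
print(rows)  # [0 0 1 1 1]
print(cols)  # [1 2 0 1 2]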
Since you have an n-dimensional array, though, that output may not be as useful as a masked array:
b = np.ma.masked_where(np.logical_or(a <= 0, a >= 6), a)
b
Out[40]:
masked_array(data =
[[-- 1 2]
[3 4 5]
[-- -- --]],
mask =
[[ True False False]
[False False False]
[ True True True]],
fill_value = 999999)
That gives you a more useful array that preserves location.
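If you only need a yes/no answer rather than the locations, the same combined condition works directly with any() (a minimal sketch, assuming m and X from the question):
import numpy as np

m = np.array([[0, 1, 7], [0, 3, 9]])  # hypothetical example matrix
X = 5

print(((m > 0) & (m < X)).any())  # True: 1 and 3 fall in the open range (0, X)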
I am trying to use np.where() function with nested lists.
I would like to find the index in the first layer of the nested list that satisfies a given condition.
For example, if I have the following code
arr = [[1,1], [2,2],[3,3]]
a = np.where(arr == [2,2])
then ideally I would like the code to return a as 1, since [2,2] is at index 1 of the nested list.
However, I am just getting an empty array back as a result.
Of course, I can make it work easily by implementing external for loop such as
for n in range(len(arr)):
    if arr[n] == [2,2]:
        a = n
but I would like to implement this within a single call to np.where.
Is there a way to do this?
Well, you can write your own function to do so. You'll need to:
find every row equal to the one you are looking for
get the indices of the matching rows (you can use where)
numpy comparison
You can use a comparison operator to check whether each element satisfies a condition. For example:
np_arr = np.array([1, 2, 3, 4, 5])
print(np_arr < 3)
This returns a boolean array where each element is True wherever the condition is satisfied:
[ True True False False False]
For a 2D array you'll get a 2D boolean array:
to_find = np.array([2, 2])
np_arr = np.array([
    [1, 1],
    [2, 2],
    [3, 3],
    [2, 2]
])
print(np_arr == to_find)
The result is:
[[False False]
[ True True]
[False False]
[ True True]]
Now we are looking for the rows where all values are True, so we can use the all method of ndarray and tell it which axis to reduce over. We want to check along each row, i.e. axis=1:
to_find = np.array([2, 2])
np_arr = np.array([
    [1, 1],
    [2, 2],
    [3, 3],
    [2, 2]
])
print((np_arr == to_find).all(axis=1))
The result is:
[False True False True]
Get the indices of the Trues
Finally, you are looking for the indices where the values are True:
np.where((np_arr == to_find).all(axis=1))
The result would be:
(array([1, 3]),)
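Wrapped up as the function described at the start of this answer (a minimal sketch; find_rows is a hypothetical name):
import numpy as np

def find_rows(arr, row):
    # indices of the rows of arr equal to row, element-wise
    return np.where((np.asarray(arr) == row).all(axis=1))[0]

print(find_rows([[1, 1], [2, 2], [3, 3], [2, 2]], [2, 2]))  # [1 3]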
The best solution is the one mentioned by @Michael Szczesny, but using np.where you can do this too:
a = np.where(np.array(arr) == [2, 2])[0]
resulted_ind = np.where(np.bincount(a) == 2)[0] # --> [1]
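A quick check of that variant against the question's arr (a minimal sketch):
import numpy as np

arr = [[1, 1], [2, 2], [3, 3]]
a = np.where(np.array(arr) == [2, 2])[0]        # row index of every matching element: [1 1]
resulted_ind = np.where(np.bincount(a) == 2)[0]  # rows where both columns matched
print(resulted_ind)  # [1]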
numpy runs in Python, so you can use both basic Python lists and numpy arrays (which are more like MATLAB matrices).
A list of lists:
In [43]: alist = [[1,1], [2,2],[3,3]]
A list has an index method, which tests against each element of the list (elements here are 2 element lists):
In [44]: alist.index([2,2])
Out[44]: 1
In [45]: alist.index([2,3])
Traceback (most recent call last):
Input In [45] in <cell line: 1>
alist.index([2,3])
ValueError: [2, 3] is not in list
alist == [2,2] returns False, because the whole list is being compared, and it is not equal to [2,2].
If we make an array from that list:
In [46]: arr = np.array(alist)
In [47]: arr
Out[47]:
array([[1, 1],
[2, 2],
[3, 3]])
we can do an == test - but it compares numeric elements.
In [48]: arr == np.array([2,2])
Out[48]:
array([[False, False],
[ True, True],
[False, False]])
Underlying this comparison is the concept of broadcasting, which allows numpy to compare a (3,2) array with a (2,) array (a 2d with a 1d). Here it's trivial, but it can be much more complicated.
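For instance, the (2,) array is effectively stretched to the (3,2) shape before the element-wise comparison; np.broadcast_to makes the stretching explicit (a minimal illustration, reusing arr from above):
np.broadcast_to(np.array([2, 2]), arr.shape)
# array([[2, 2],
#        [2, 2],
#        [2, 2]])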
To find rows where all values are True, use:
In [50]: (arr == np.array([2,2])).all(axis=1)
Out[50]: array([False, True, False])
and where finds the True in that array (the result is a tuple with 1 array):
In [51]: np.where(_)
Out[51]: (array([1]),)
In Octave the equivalent is:
>> arr = [[1,1];[2,2];[3,3]]
arr =
1 1
2 2
3 3
>> all(arr == [2,2],2)
ans =
0
1
0
>> find(all(arr == [2,2],2))
ans = 2
(Octave indexing is 1-based, so ans = 2 matches the Python index 1.)
I have an array, A. Its length may vary but it’s always filled with 1’s and/or 0's.
A = np.array([1,0,1,0])
A gets passed into a function that produces array B.
B = np.array([0.75, 0.25])
B’s length is always equal to the number of 1's in A.
How can I most efficiently update A (or create a new array) that equals
[0.75, 0, 0.25, 0]
in this example? My hope is for it to work with any size array A that meets the constraints I've laid out for B above. The first value of B always replaces the first 1 in A, and so on.
I got it to work by converting them to lists and looping
pos = 0
for i in range(len(a)):
    if a[i] == 1:
        a[i] = b[pos]
        pos += 1
But I’m hoping for a better solution because I’ll have to do this a lot and A and B could potentially be much larger.
In [1]: A = np.array([1,0,1,0])
First you want to identify the nonzero elements. There are several ways:
In [2]: A.astype(bool)
Out[2]: array([ True, False, True, False])
In [3]: A>0
Out[3]: array([ True, False, True, False])
In [4]: np.nonzero(A)
Out[4]: (array([0, 2]),)
Any of these can be used to select those elements; both for getting and setting:
In [5]: A[np.nonzero(A)]
Out[5]: array([1, 1])
In [6]: A[np.nonzero(A)] = [2,3]
In [7]: A
Out[7]: array([2, 0, 3, 0])
In [8]: A[[0,2]]
Out[8]: array([2, 3])
But if we try to assign the two float values, they get truncated:
In [9]: A[[0,2]] = [.25, .75]
In [10]: A
Out[10]: array([0, 0, 0, 0])
A needs to be float dtype:
In [11]: A1=A.astype(float)
In [12]: A1[[0,2]] = [.25, .75]
In [13]: A1
Out[13]: array([0.25, 0. , 0.75, 0. ])
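Putting the pieces together for the question's A and B (a minimal sketch; out is a hypothetical name):
A = np.array([1, 0, 1, 0])
B = np.array([0.75, 0.25])

out = A.astype(float)   # float copy so the assignment isn't truncated
out[A == 1] = B         # fills the positions of the 1's, in B's order
print(out)              # [0.75 0.   0.25 0.  ]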
I have a boolean mask shaped (M, N). Each column in the mask may have a different number of True elements, but is guaranteed to have at least two. I want to find the row index of the last two such elements as efficiently as possible.
If I only wanted one element, I could do something like (M - 1) - np.argmax(mask[::-1, :], axis=0). However, that won't help me get the second-to-last index.
I've come up with an iterative solution using np.where or np.nonzero:
M = 4
N = 3
mask = np.array([
    [False, True, True],
    [True, False, True],
    [True, False, True],
    [False, True, False]
])
result = np.zeros((2, N), dtype=np.intp)
for col in range(N):
    result[:, col] = np.flatnonzero(mask[:, col])[-2:]
This creates the expected result:
array([[1, 0, 1],
[2, 3, 2]], dtype=int64)
I would like to avoid the final loop. Is there a reasonably vectorized form of the above? I am looking for specifically two rows, which are always guaranteed to exist. A general solution for arbitrary element counts is not required.
An argsort does it (the stable sort keeps equal entries in their original row order, so the last two entries per column are the last two True rows) -
In [9]: np.argsort(mask,axis=0,kind='stable')[-2:]
Out[9]:
array([[1, 0, 1],
[2, 3, 2]])
Another with cumsum -
c = mask.cumsum(0)  # running count of Trues down each column
# keep only the Trues whose running count is within 1 of the column total,
# i.e. the last two Trues per column, then read off their row indices
out = np.where((mask & (c >= c[-1] - 1)).T)[1].reshape(-1, 2).T
Specifically for exactly two rows, one way with argmax -
c = mask.copy()
idx = len(c) - c[::-1].argmax(0) - 1   # row index of the last True in each column
c[idx, np.arange(len(idx))] = 0        # clear that last True
idx2 = len(c) - c[::-1].argmax(0) - 1  # now this finds the second-to-last True
out = np.vstack((idx2, idx))
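A quick sanity check of the argsort approach against the mask from the question (a minimal sketch):
import numpy as np

mask = np.array([
    [False, True, True],
    [True, False, True],
    [True, False, True],
    [False, True, False]
])
print(np.argsort(mask, axis=0, kind='stable')[-2:])
# [[1 0 1]
#  [2 3 2]]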
I have data which are stored in a numpy array with n rows and p columns.
I would like to check which rows are fully finite and store this information in a boolean array to use it as a mask somewhere.
I have solved it for the p=2 case, but would like to solve it for all cases.
My code looks like this:
raw_test = np.array([[0, np.nan], [0, 0], [np.nan, np.nan]])
test = np.isfinite(raw_test)

def multiply(x):
    return x[0] * x[1]

np.apply_along_axis(multiply, 1, test)
You can use numpy.isfinite to check which of the items are finite, then find the rows that are all True using numpy.all; numpy.where then gives their indices.
>>> np.isfinite(raw_test)
array([[ True, False],
       [ True,  True],
       [False, False]])
>>> np.all(np.isfinite(raw_test), axis=1)
array([False,  True, False])
>>> np.where(np.all(np.isfinite(raw_test), axis=1))[0]
array([1])
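That boolean array can then be used directly as the row mask the question asks for (a minimal sketch, continuing the session above):
>>> finite_rows = np.all(np.isfinite(raw_test), axis=1)
>>> raw_test[finite_rows]
array([[0., 0.]])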
Another option is to use a masked_array:
import numpy as np
raw_test = np.array([[0, np.nan], [0, 0], [np.nan, np.nan]])
test = np.ma.masked_invalid(raw_test)
print(test)
# [[0.0 --]
# [0.0 0.0]
# [-- --]]
def multiply(x):
    return x[0] * x[1]
print(np.apply_along_axis(multiply, 1, test))
yields
[ nan 0. nan]
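With the masked array in hand, the fully-finite row mask from the question can also be read straight off the mask attribute (a minimal sketch, reusing test from above):
fully_finite = ~test.mask.any(axis=1)   # True where no entry in the row is masked
print(fully_finite)  # [False  True False]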
I have 5 grayscale images in the form of 288x288 ndarrays. The values in each ndarray are just numpy.float32 numbers ranging from 0.0 to 255.0. For each ndarray, I've created a numpy.ma.MaskedArray object as follows:
def bool_row(row):
    return [value == 183. for value in row]

mask = [bool_row(row) for row in nd_array_1]
masked_array_1 = ma.masked_array(nd_array_1, mask=mask)
The value 183. represents "garbage" in the image. All 5 images have a bit of "garbage" in them. I want to take the median of the masked images, where taking the median for each point should ignore any masked values. The result would be the correct image with no garbage.
When I try:
ma.median([masked_array_1, masked_array_2, masked_array_3, masked_array_4, masked_array_5], axis=0)
I get what seems to be the median except instead of ignoring masked values, it treats them as 183., so the result just has the superimposed garbage from all the pictures. When I just take the median of two masked images:
ma.median([masked_array_1, masked_array_2], axis=0)
It looks like it started to do the right thing, but then placed the value of 183. even where both masked arrays contain a MaskedConstant.
I could do something like the following, but I feel there's probably a way to make ma.median just behave as expected:
unmasked_array_12 = ma.median([masked_array_1, masked_array_2], axis=0)
mask = [bool_row(row) for row in unmasked_array_12]
masked_array_12 = ma.masked_array(unmasked_array_12, mask=mask)
unmasked_array_123 = ma.median([masked_array_12, masked_array_3], axis=0)
mask = [bool_row(row) for row in unmasked_array_123]
masked_array_123 = ma.masked_array(unmasked_array_123, mask=mask)
...
How do I make ma.median work as expected without resorting to the above unpleasantness?
I suspect the problem is in how ma.median handles a non-array argument. It might be converting a list to a plain numpy array, without checking the types of the elements of the list.
Consider the following example with 1-D arrays:
In [64]: a = ma.array([1, 2, -10, 3, -10, -10], mask=[0,0,1,0,1,1])
In [65]: b = ma.array([1, 2, -10, -10, 4, -10], mask=[0,0,1,1,0,1])
In [66]: a
Out[66]:
masked_array(data = [1 2 -- 3 -- --],
mask = [False False True False True True],
fill_value = 999999)
In [67]: b
Out[67]:
masked_array(data = [1 2 -- -- 4 --],
mask = [False False True True False True],
fill_value = 999999)
The following are not correct--it appears to ignore the masks:
In [68]: ma.median([a, b])
Out[68]: -4.5
In [69]: ma.median([a, b], axis=0)
Out[69]:
masked_array(data = [ 1. 2. -10. -3.5 -3. -10. ],
mask = False,
fill_value = 1e+20)
However, if I first create a new masked array using ma.array, ma.median handles it correctly:
In [70]: c = ma.array([a, b])
In [71]: c
Out[71]:
masked_array(data =
[[1 2 -- 3 -- --]
[1 2 -- -- 4 --]],
mask =
[[False False True False True True]
[False False True True False True]],
fill_value = 999999)
In [72]: ma.median(c)
Out[72]: 2.0
In [73]: ma.median(c, axis=0)
Out[73]:
masked_array(data = [1.0 2.0 -- 3.0 4.0 --],
mask = [False False True False False True],
fill_value = 1e+20)
So to fix your problem, it might be as simple as replacing this:
ma.median([masked_array_1, masked_array_2, masked_array_3, masked_array_4, masked_array_5], axis=0)
with this:
stacked = ma.array([masked_array_1, masked_array_2, masked_array_3, masked_array_4, masked_array_5])
ma.median(stacked, axis=0)
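A quick check that the masks survive the stacking, using small fake images where 183. marks garbage as in the question (a minimal sketch):
import numpy as np
import numpy.ma as ma

imgs = [np.array([[183., 5.], [7., 183.]]),
        np.array([[1., 183.], [183., 9.]]),
        np.array([[3., 5.], [7., 183.]])]
stacked = ma.array([ma.masked_equal(img, 183.) for img in imgs])

print(ma.median(stacked, axis=0))  # per-pixel median, ignoring masked garbage
# [[2.0 5.0]
#  [7.0 9.0]]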
you can use the following to get rid of all of the 183 values just while calculating the median (note the test has to be x != 183, not x is not 183, since is compares object identity rather than value):
masked_arrays = [masked_array_1, masked_array_2, masked_array_3]
no_junk_arrays = [[x for x in masked_array if x != 183] for masked_array in masked_arrays]
ma.median(no_junk_arrays)
For example
>>> masked_array_1 = [1,183,4]
>>> masked_array_2 = [1,183,2]
>>> masked_array_3 = [2,183,2]
>>> masked_arrays=[masked_array_1,masked_array_2,masked_array_3]
>>> no_junk_arrays=[[x for x in masked_array if x != 183] for masked_array in masked_arrays]
>>> no_junk_arrays
[[1, 4], [1, 2], [2, 2]]
I'm sure it can be done if you find the clever sequence of numpy functions to invoke. But it can also be done naively:
def merge(a1, a2):
    result = []
    for x, y in zip(a1, a2):
        if x == 183:
            x = y
        result.append(x)
    return result

array_1 = [1, 183, 2]
array_2 = [1, 183, 183]
array_3 = [183, 4, 2]
print(merge(merge(array_1, array_2), array_3))
If this really runs too slowly, you can try the same code on PyPy instead of CPython.
If what you are after is fetching the non-garbage value for every pixel, you could do something along the lines of:
stacked_imgs = np.dstack((img1, img2, img3))
mask = stacked_imgs == 183
# Find the first False, i.e. non-183 entry, along the stack axis
index = np.argmin(mask, axis=-1)
correct_image = np.take_along_axis(stacked_imgs, index[..., None], axis=-1)[..., 0]
If all non-183 entries for a given pixel are always the same, this will give you the result you are after.
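A small check of that idea, with 2x2 images and the question's 183. garbage value (a minimal sketch; take_along_axis picks one stack entry per pixel):
import numpy as np

img1 = np.array([[183., 5.], [7., 183.]])
img2 = np.array([[1., 183.], [183., 9.]])
img3 = np.array([[1., 5.], [7., 9.]])

stacked_imgs = np.dstack((img1, img2, img3))
index = np.argmin(stacked_imgs == 183., axis=-1)   # first non-183 entry per pixel
correct_image = np.take_along_axis(stacked_imgs, index[..., None], axis=-1)[..., 0]
print(correct_image)
# [[1. 5.]
#  [7. 9.]]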