I have multiple numpy arrays and I want to create new arrays by doing something that is like an XOR ... but not quite.
My input is two arrays, array1 and array2.
My output is a modified (or new array, I don't really care) version of array1.
The modification is elementwise, by doing the following:
1.) If either array has 0 at the given index, then the element at that index is left unchanged.
2.) If both array1 and array2 are nonzero at the given index, then the modified array is assigned array1's value minus array2's value, floored at a minimum of zero.
Examples:
array1: [0, 3, 8, 0]
array2: [1, 1, 1, 1]
output: [0, 2, 7, 0]
array1: [1, 1, 1, 1]
array2: [0, 3, 8, 0]
output: [1, 0, 0, 1]
array1: [10, 10, 10, 10]
array2: [8, 12, 8, 12]
output: [2, 0, 2, 0]
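For reference, a minimal sketch that implements the two rules literally (variable names are mine); for non-negative inputs this agrees with the clipped-subtraction answers below, since subtracting 0 leaves a value unchanged anyway:
import numpy as np
array1 = np.array([1, 1, 1, 1])
array2 = np.array([0, 3, 8, 0])
# where both are nonzero: max(array1 - array2, 0); otherwise keep array1's value
output = np.where((array1 != 0) & (array2 != 0),
                  np.maximum(array1 - array2, 0),
                  array1)
print(output)  # [1 0 0 1]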
I would like to be able to do this with, say, a single numpy.copyto statement, but I don't know how. Thank you.
edit:
it just hit me. could I do:
new_array = np.zeros(size_of_array1)
np.copyto(new_array, array1 - array2, where=array1 > array2)
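For concreteness, a runnable version of that idea (a sketch; size_of_array1 above stands in for array1's shape):
import numpy as np
array1 = np.array([0, 3, 8, 0])
array2 = np.array([1, 1, 1, 1])
new_array = np.zeros_like(array1)  # start from zeros
np.copyto(new_array, array1 - array2, where=array1 > array2)  # copy only the positive differences
print(new_array)  # [0 2 7 0]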
Edit 2: Since I have received several answers very quickly I am going to time the different answers against each other to see how they do. Be back with results in a few minutes.
Okay, results are in:
array of random ints 0 to 5, size = 10,000, 10 loops:
1.) using my np.copyto method: 0.000768184661865
2.) using clip: 0.000391960144043
3.) using maximum: 0.000403165817261
Kasramvd also provided some useful timings below
You can use a simple subtraction and clip the result with zero as the minimum:
(arr1 - arr2).clip(min=0)
Demo:
In [43]: arr1 = np.array([0,3,8,0]); arr2 = np.array([1,1,1,1])
In [44]: (arr1 - arr2).clip(min=0)
Out[44]: array([0, 2, 7, 0])
On large arrays it's also faster than the maximum approach:
In [51]: arr1 = np.arange(10000); arr2 = np.arange(10000)
In [52]: %timeit np.maximum(0, arr1 - arr2)
22.3 µs ± 1.77 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [53]: %timeit (arr1 - arr2).clip(min=0)
20.9 µs ± 167 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [54]: arr1 = np.arange(100000); arr2 = np.arange(100000)
In [55]: %timeit np.maximum(0, arr1 - arr2)
671 µs ± 5.69 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [56]: %timeit (arr1 - arr2).clip(min=0)
648 µs ± 4.43 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Note that if it's possible for arr2 to have negative values, you should consider applying abs to arr2 to get the expected result:
(arr1 - abs(arr2)).clip(min=0)
In [73]: np.maximum(0,np.array([0,3,8,0])-np.array([1,1,1,1]))
Out[73]: array([0, 2, 7, 0])
This doesn't explicitly address
If either array has 0 for the given index, then the index is left unchanged.
but the results match for all examples:
In [74]: np.maximum(0,np.array([1,1,1,1])-np.array([0,3,8,0]))
Out[74]: array([1, 0, 0, 1])
In [75]: np.maximum(0,np.array([10,10,10,10])-np.array([8,12,8,12]))
Out[75]: array([2, 0, 2, 0])
You can simply subtract the arrays first and then use boolean array indexing on the result to assign 0 wherever there are negative values, as in:
# subtract
In [43]: subtracted = arr1 - arr2
# get a boolean mask by checking for < 0
# index into the array and assign 0
In [44]: subtracted[subtracted < 0] = 0
In [45]: subtracted
Out[45]: array([0, 2, 7, 0])
Applying the same for the other inputs specified by OP:
In [46]: arr1 = np.array([1, 1, 1, 1])
...: arr2 = np.array([0, 3, 8, 0])
In [47]: subtracted = arr1 - arr2
In [48]: subtracted[subtracted < 0] = 0
In [49]: subtracted
Out[49]: array([1, 0, 0, 1])
And for the third input arrays:
In [50]: arr1 = np.array([10, 10, 10, 10])
...: arr2 = np.array([8, 12, 8, 12])
In [51]: subtracted = arr1 - arr2
In [52]: subtracted[subtracted < 0] = 0
In [53]: subtracted
Out[53]: array([2, 0, 2, 0])
Related
How can I compute the variance without zero elements?
For example
np.var([[1, 1], [1, 2]], axis=1) -> [0, 0.25]
I need:
var([[1, 1, 0], [1, 2, 0]], axis=1) -> [0, 0.25]
Is this what you are looking for? You can filter out columns where all values are 0 (i.e., keep columns where at least one value is not 0).
m = np.array([[1, 1, 0], [1, 2, 0]])
np.var(m[:, np.any(m != 0, axis=0)], axis=1)
# Output
array([0. , 0.25])
V1
You can use a masked array:
data = np.array([[1, 1, 0], [1, 2, 0]])
np.ma.array(data, mask=(data == 0)).var(axis=1)
The result is
masked_array(data=[0. , 0.25],
             mask=False,
             fill_value=1e+20)
The raw numpy array is the data attribute of the resulting masked array:
>>> np.ma.array(data, mask=(data == 0)).var(axis=1).data
array([0. , 0.25])
V2
Without masked arrays, the operation of removing a variable number of elements in each row is a bit tricky. It would be simpler to implement the variance in terms of the formula sum(x**2) / N - (sum(x) / N)**2 and partial reduction of ufuncs.
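If np.add.reduceat is unfamiliar: it performs exactly such a partial reduction, summing the segments of an array between given split indices. A minimal illustration (example values are mine):
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6])
# sums the segments x[0:2], x[2:5] and x[5:]
print(np.add.reduceat(x, [0, 2, 5]))  # [ 3 12  6]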
First we need to find the split indices and segment lengths. In the general case, that looks like
lens = np.count_nonzero(data, axis=1)  # number of nonzero elements per row
inds = np.r_[0, lens[:-1].cumsum()]    # start index of each row's segment in the raveled data
Now you can operate on the raveled masked data:
mdata = data[data != 0]
mdata2 = mdata**2
var = np.add.reduceat(mdata2, inds) / lens - (np.add.reduceat(mdata, inds) / lens)**2
This gives you the same result for var (probably more efficiently than the masked version by the way):
array([0. , 0.25])
V3
The var function appears to use the more traditional formula ((x - x.mean())**2).mean(). You can implement that using the quantities above with just a bit more work:
means = (np.add.reduceat(mdata, inds) / lens).repeat(lens)
var = np.add.reduceat((mdata - means)**2, inds) / lens
Comparison
Here is a quick benchmark for the two approaches:
def nzvar_v1(data):
    return np.ma.array(data, mask=(data == 0)).var(axis=1).data

def nzvar_v2(data):
    lens = np.count_nonzero(data, axis=1)
    inds = np.r_[0, lens[:-1].cumsum()]
    mdata = data[data != 0]
    return np.add.reduceat(mdata**2, inds) / lens - (np.add.reduceat(mdata, inds) / lens)**2

def nzvar_v3(data):
    lens = np.count_nonzero(data, axis=1)
    inds = np.r_[0, lens[:-1].cumsum()]
    mdata = data[data != 0]
    return np.add.reduceat((mdata - (np.add.reduceat(mdata, inds) / lens).repeat(lens))**2, inds) / lens
np.random.seed(100)
data = np.random.randint(10, size=(1000, 1000))
%timeit nzvar_v1(data)
18.3 ms ± 278 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit nzvar_v2(data)
5.89 ms ± 69.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit nzvar_v3(data)
11.8 ms ± 62.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So for a large dataset, the second approach, while requiring a bit more code, appears to be ~3x faster than masked arrays and ~2x faster than using the traditional formulation.
I need to check if an array A contains all elements of another array B. If not, output the missing elements. Both A and B are integer arrays, and B always runs from 0 to N with a step of 1.
import numpy as np
A=np.array([1,2,3,6,7,8,9])
B=np.arange(10)
I know that I can use the following to check whether any elements are missing (using the builtin all here; np.all does not evaluate a generator expression correctly), but it does not give the indices of the missing elements.
all(elem in A for elem in B)
Is there a good way in python to output the indices of the missing elements?
IIUC, you can try the following, assuming that B is always an "index" list:
[i for i in B if i not in A]
The output would be: [0, 4, 5]
Best way to do it with numpy
Numpy actually has a function to perform this: numpy.setdiff1d
np.setdiff1d(B, A)
# Which returns
array([0, 4, 5])
You can use enumerate to get both the index and the content of a list. The following code would do what you want:
idx = [idx for idx, element in enumerate(B) if element not in A]
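For the arrays above, idx comes out as [0, 4, 5]; here the indices happen to coincide with the missing values themselves, since B is an arange.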
I am assuming we want to get the elements exclusive to B, when compared to A.
Approach #1
Given the specifics (B is always from 0 to N with a step of 1), we can use a simple mask-based approach -
mask = np.ones(len(B), dtype=bool)
mask[A] = False
out = B[mask]
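A quick check with the arrays from the question (a sketch):
import numpy as np
A = np.array([1, 2, 3, 6, 7, 8, 9])
B = np.arange(10)
mask = np.ones(len(B), dtype=bool)  # start by assuming every value of B is missing
mask[A] = False                     # un-mark the values present in A
print(B[mask])                      # [0 4 5]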
Approach #2
Another one that edits B and would be more memory-efficient -
B[A] = -1        # mark the values that are present in A (note: modifies B in place)
out = B[B>=0]    # what remains non-negative is exclusive to B
Approach #3
A more generic case of integers could be handled differently -
def setdiff_for_ints(B, A):
    N = max(B.max(), A.max()) - min(min(A.min(), B.min()), 0) + 1
    mask = np.zeros(N, dtype=bool)
    mask[B] = True
    mask[A] = False
    out = np.flatnonzero(mask)
    return out
Sample run -
In [77]: A
Out[77]: array([ 1, 2, 3, 6, 7, 8, -6])
In [78]: B
Out[78]: array([1, 3, 4, 5, 7, 9])
In [79]: setdiff_for_ints(B, A)
Out[79]: array([4, 5, 9])
# Using np.setdiff1d to verify :
In [80]: np.setdiff1d(B, A)
Out[80]: array([4, 5, 9])
Timings -
In [81]: np.random.seed(0)
...: A = np.unique(np.random.randint(-10000,100000,1000000))
...: B = np.unique(np.random.randint(0,100000,1000000))
# #Hugolmn's soln with np.setdiff1d
In [82]: %timeit np.setdiff1d(B, A)
4.78 ms ± 96.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [83]: %timeit setdiff_for_ints(B, A)
599 µs ± 6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
I need to find the index of the minimum per row in a 2-dim array which at the same time satisfies an additional constraint on the column values. Given two arrays a and b
a = np.array([[1,0,1],[0,0,1],[0,0,0],[1,1,1]])
b = np.array([[1,-1,2],[4,-1,1],[1,-1,2],[1,2,-1]])
the objective is to find the indices for which a == 1, b is positive, and b is the minimum value of the row. Fulfilling the first two conditions is easy:
idx = np.where(np.logical_and(a == 1, b > 0))
which yields the indices:
(array([0, 0, 1, 3, 3]), array([0, 2, 2, 0, 1]))
Now I need to filter out the duplicate row entries (sticking with the minimum value only), but I cannot think of an elegant way to achieve that. In the above example the result should be:
(array([0,1,3]), array([0,2,0]))
edit:
It should also work for a containing other values than just 0 and 1.
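For illustration, one way to express that filtering step as a sketch (not necessarily the fastest; np.inf is used so that invalid entries can never win the per-row argmin):
import numpy as np
a = np.array([[1,0,1],[0,0,1],[0,0,0],[1,1,1]])
b = np.array([[1,-1,2],[4,-1,1],[1,-1,2],[1,2,-1]])
valid = (a == 1) & (b > 0)                # the first two conditions
masked = np.where(valid, b, np.inf)       # invalid entries can never be the minimum
rows = np.flatnonzero(valid.any(axis=1))  # keep only rows with at least one valid entry
cols = masked[rows].argmin(axis=1)        # column of each such row's minimum
print((rows, cols))                       # (array([0, 1, 3]), array([0, 2, 0]))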
Updated after trying to understand the problem better; try:
c = b*(b*a > 0)
np.where(c==np.min(c[np.nonzero(c)]))
Output:
(array([0, 1, 3], dtype=int64), array([0, 2, 0], dtype=int64))
Timings:
Method 1
a = np.array([[1,0,1],[0,0,1],[0,0,0],[1,1,1]])
b = np.array([[1,-1,2],[4,-1,1],[1,-1,2],[1,2,-1]])
b[b<0] = 100000
cond = [[True if i == b.argmin(axis=1)[k] else False for i in range(b.shape[1])] for k in range(b.shape[0])]
idx = np.where(np.logical_and(np.logical_and(a == 1, b > 0),cond))
idx
Method 2
c = b*(b*a > 0)
idx1 = np.where(c==np.min(c[np.nonzero(c)]))
idx1
Method 1 Timing:
28.3 µs ± 418 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Method 2 Timing:
12.2 µs ± 144 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
I found a solution based on list comprehension. It is necessary to change the negative values of b to some high value though.
a = np.array([[1,0,1],[0,0,1],[0,0,0],[1,1,1]])
b = np.array([[1,-1,2],[4,-1,1],[1,-1,2],[1,2,-1]])
b[b<0] = 100000
cond = [[True if i == b.argmin(axis=1)[k] else False for i in range(b.shape[1])] for k in range(b.shape[0])]
idx = np.where(np.logical_and(np.logical_and(a == 1, b > 0),cond))
print(idx)
(array([0, 1, 3]), array([0, 2, 0]))
Please let me hear what you think.
edit: I just noticed that this solution is horribly slow.
So I want to shift the values in matrix_a according to the values in matrix_b. For example, if the value in matrix_b at position 0,0 is 1, then the element in result_matrix at 0,0 should be the element at 1,1 in matrix_a. I already have this working using the following code:
import numpy as np
matrix_a = np.matrix([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]])
matrix_b = np.matrix([[1, 1, 0],
                      [0,-1, 0],
                      [0, 0,-1]])
result_matrix = np.zeros((3,3))
for x in range(matrix_b.shape[0]):
    for y in range(matrix_b.shape[1]):
        value = matrix_b.item(x, y)
        result_matrix[x][y] = matrix_a.item(x + value, y + value)
print(result_matrix)
which results in:
[[5. 6. 3.]
 [4. 1. 6.]
 [7. 8. 5.]]
Right now this is quite slow on large matrices, and I have the feeling that this can be optimized using one of numpy or scipy's functions. Can someone tell me how this can be done more efficiently?
Using np.indices
ix = np.indices(matrix_a.shape)
matrix_a[tuple(ix + np.array(matrix_b))]
Out[]:
matrix([[5, 6, 3],
[4, 1, 6],
[7, 8, 5]])
As a word of advice, try to avoid using np.matrix - it's only really for compatibility with old MATLAB code, and breaks a lot of numpy functions. np.array works just as well 99% of the time, and the rest of the time np.matrix will be confusing for core numpy users.
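For instance, the same lookup works unchanged with plain arrays (a minimal sketch):
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([[1, 1, 0], [0, -1, 0], [0, 0, -1]])
ix = np.indices(a.shape)
print(a[tuple(ix + b)])
# [[5 6 3]
#  [4 1 6]
#  [7 8 5]]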
Here's one way with integer indexing, using open range arrays (np.ogrid) to generate row and column indices for all elements -
I,J = np.ogrid[:matrix_b.shape[0],:matrix_b.shape[1]]  # open-range row and column indices
out = matrix_a[I+matrix_b, J+matrix_b]                  # fancy-index with the shifted indices
Output for the given sample -
In [152]: out
Out[152]:
matrix([[5, 6, 3],
[4, 1, 6],
[7, 8, 5]])
Timings on a large 5000x5000 dataset -
In [142]: np.random.seed(0)
...: N = 5000 # matrix size
...: matrix_a = np.random.rand(N,N)
...: matrix_b = np.random.randint(0,N,matrix_a.shape)-matrix_a.shape[1]
# #Daniel F's soln
In [143]: %%timeit
...: ix = np.indices(matrix_a.shape)
...: matrix_a[tuple(ix + np.array(matrix_b))]
1.37 s ± 99.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# Solution from this post
In [149]: %%timeit
...: I,J = np.ogrid[:matrix_b.shape[0],:matrix_b.shape[1]]
...: out = matrix_a[I+matrix_b, J+matrix_b]
1.17 s ± 3.21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I have come across a problem like this:
suppose I have arrays like this:
a = np.array([[1,2,3,4,5,4,3,2,1],])
label = np.array([[1,0,1,0,0,1,1,0,1],])
I need to obtain the index of a at which the value of label is 1 and the value of a is the largest among all positions where label is 1.
It may be confusing: in the above example, the indices where label is 1 are 0, 2, 5, 6, 8, and their corresponding values of a are 1, 3, 4, 3, 1, among which 4 is the largest. Thus I need to get the result 5, which is the index of the number 4 in a. How could I do this with numpy?
Get the indices of the 1s, say as idx, then index into a with it, get the argmax, and finally trace it back to the original order by indexing into idx -
idx = np.flatnonzero(label==1)  # positions where label is 1
out = idx[a[idx].argmax()]      # argmax among those positions, mapped back to the original order
Sample run -
# Assuming inputs to be 1D
In [18]: a
Out[18]: array([1, 2, 3, 4, 5, 4, 3, 2, 1])
In [19]: label
Out[19]: array([1, 0, 1, 0, 0, 1, 1, 0, 1])
In [20]: idx = np.flatnonzero(label==1)
In [21]: idx[a[idx].argmax()]
Out[21]: 5
For a as ints and label as an array of 0s and 1s, we could optimize further by offsetting a based on the range of values in it, like so -
(label*(a.max()-a.min()+1) + a).argmax()
Furthermore, if a has positive numbers only, it would simplify to -
(label*(a.max()+1) + a).argmax()
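As a quick sanity check of the offset trick on the question's arrays (a sketch): the added offset exceeds a's range, so the global argmax is guaranteed to land on a labeled position.
import numpy as np
a = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1])
label = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1])
# label*(a.max()+1) lifts every labeled entry above every unlabeled one
print((label * (a.max() + 1) + a).argmax())  # 5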
Timings for a largish a of positive ints -
In [115]: np.random.seed(0)
...: a = np.random.randint(0,10,(100000))
...: label = np.random.randint(0,2,(100000))
In [117]: %%timeit
...: idx = np.flatnonzero(label==1)
...: out = idx[a[idx].argmax()]
1000 loops, best of 3: 592 µs per loop
In [116]: %timeit (label*(a.max()-a.min()+1) + a).argmax()
1000 loops, best of 3: 357 µs per loop
# #coldspeed's soln
In [120]: %timeit np.ma.masked_where(~label.astype(bool), a).argmax()
1000 loops, best of 3: 1.63 ms per loop
# won't work with negative numbers in a
In [119]: %timeit (label*(a.max()+1) + a).argmax()
1000 loops, best of 3: 292 µs per loop
# #klim's soln (won't work with negative numbers in a)
In [121]: %timeit np.argmax(a * (label == 1))
1000 loops, best of 3: 229 µs per loop
You can use masked arrays:
>>> np.ma.masked_where(~label.astype(bool), a).argmax()
5
Here is one of the simplest ways.
>>> np.argmax(a * (label == 1))
5
>>> np.argmax(a * (label == 1), axis=1)
array([5])
Coldspeed's method may take more time.