Suppose we have a NumPy 3D tensor D of dimensions r x c x d, for example:
r = 2
c = 3
d = 3
D = np.array([[[1, 5, 3], [1, 2, 5], [1, 4, 3]], [[1, 1, 6], [3, 1, 7], [5, 1, 3]]])
array([[[1, 5, 3],
        [1, 2, 5],
        [1, 4, 3]],

       [[1, 1, 6],
        [3, 1, 7],
        [5, 1, 3]]])
and a 2D integer matrix Q of dimensions r x c, for example:
Q = np.array([[1, 1, 2], [2, 1, 2]])
array([[1, 1, 2],
       [2, 1, 2]])
where every element in Q is less than d.
I need to sum the first Q[r_i][c_i] + 1 elements along the third dimension of D (i.e. D[r_i, c_i, 0] through D[r_i, c_i, Q[r_i, c_i]]) for every 0 <= r_i < r and 0 <= c_i < c.
The expected result (Res) for the example above is a 2D matrix of size r x c (2x3):
Res = np.array([[6, 3, 8], [8, 4, 9]])
array([[6, 3, 8],
       [8, 4, 9]])
My current solution uses a list comprehension looping over r_i and c_i:
r = 2
c = 3
res = np.array([[np.sum(D[r_i, c_i, :Q[r_i, c_i]+1]) for c_i in range(c)] for r_i in range(r)])
Is there a more efficient or elegant way to solve this problem?
Let us try:
# this is equivalent to double loop on r_i, c_i
x,y = np.ogrid[:r, :c]
# we take the cumsum on the last axis,
# then extract the Q[r_i, c_i]'th sum at r_i, c_i
out = D.cumsum(axis=-1)[x,y, Q]
Output:
array([[6, 3, 8],
       [8, 4, 9]])
Cross check
np.allclose(out, res)
# True
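If you prefer not to build the index grids yourself, np.take_along_axis (available in NumPy 1.15+) can gather the Q-th cumulative sum along the last axis. A minimal sketch, reusing D, Q, and out from above:

import numpy as np

# cumulative sums along the last axis, then pick the Q[r_i, c_i]-th one per cell
cs = D.cumsum(axis=-1)
out2 = np.take_along_axis(cs, Q[..., None], axis=-1)[..., 0]

print(np.array_equal(out2, out))  # expected: True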
Related
For example, you are given the array:
array = [[2, 1, 4],
         [1, 3, 7],
         [7, 1, 4]]
and want to print each vertical column as a separate list:
res1 = [2, 1, 7]
res2 = [1, 3, 1]
res3 = [4, 7, 4]
What would be the most efficient way to code this for a 2D array of any size?
If your 2D array is large and you want to do a lot of computation on it, it's better to let NumPy handle it:
import numpy as np
array = np.array([[2, 1, 4],
                  [1, 3, 7],
                  [7, 1, 4]])

for col in array.T:
    print(col)
for i in range(len(array[0])):
    print("Row {} : {}".format(i+1, array[i]))
Output
Row 1 : [2, 1, 4]
Row 2 : [1, 3, 7]
Row 3 : [7, 1, 4]
You can use this code:
for x in range(len(array)):
    print("Row", x+1, ":", array[x])
If I have a nested list, e.g. x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]], how can I calculate the differences between all of them? Let's call the lists inside x A, B, and C. I want to calculate the difference of A from B & C, then B from A & C, then C from A & B, and put the results in a list diff = [].
My problem is correctly indexing the numbers and using them to do maths with corresponding elements in other lists.
This is what I have so far:
for i in range(len(x)):
    diff = []
    for j in range(len(x)):
        if x[i] != x[j]:
            a = x[i]
            b = x[j]
            for h in range(len(a)):
                d = a[h] - b[h]
                diff.append(d)
Essentially, for the difference of A from B it is (1 - 2) + (2 - 4) + (3 - 6).
I would like it to return: diff = [[diff(A,B), diff(A,C)], [diff(B,A), diff(B,C)], [diff(C,A), diff(C,B)]] with the correct differences between points.
Thanks in advance!
Your solution is actually not that far off. As Aniketh mentioned, one issue is your use of x[i] != x[j]. Since x[i] and x[j] are lists, that comparison checks whether their contents are equal, not whether they sit at the same index in x, so it only skips the self-comparison by accident and would misbehave if x ever contained duplicate lists. What you actually want to test is whether the two indices differ, so use i != j.
Though there are other solutions posted here, I'll add mine below because I already wrote it. It makes use of python's list comprehensions.
def pairwise_diff(x):
    diff = []
    for i in range(len(x)):
        A = x[i]
        for j in range(len(x)):
            if i != j:
                B = x[j]
                assert len(A) == len(B)
                item_diff = [A[k] - B[k] for k in range(len(A))]
                diff.append(sum(item_diff))
    # Take the answers and group them into arrays of length 2
    return [diff[i : i + 2] for i in range(0, len(diff), 2)]

x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]]
print(pairwise_diff(x))
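If you are happy to use NumPy, the same pairwise sums of differences can be computed with broadcasting. A minimal sketch (the off-diagonal extraction at the end is my own assumption about the desired grouping):

import numpy as np

x = np.array([[1, 2, 3], [2, 4, 6], [3, 5, 7]])

# d[i, j] = sum of the element-wise difference x[i] - x[j]; broadcasting builds all pairs at once
d = (x[:, None, :] - x[None, :, :]).sum(axis=-1)

# keep only the i != j entries, grouped per row
diff = [[int(d[i, j]) for j in range(len(x)) if j != i] for i in range(len(x))]
print(diff)  # [[-6, -9], [6, -3], [9, 3]]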
This is one of those problems where it's really helpful to know a bit of Python's standard library — especially itertools.
For example, to get the pairs of lists you want to operate on, you can reach for itertools.permutations:
from itertools import permutations

x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]]
list(permutations(x, r=2))
This gives the pairs of lists you want:
[([1, 2, 3], [2, 4, 6]),
([1, 2, 3], [3, 5, 7]),
([2, 4, 6], [1, 2, 3]),
([2, 4, 6], [3, 5, 7]),
([3, 5, 7], [1, 2, 3]),
([3, 5, 7], [2, 4, 6])]
Now, if you could just group those by the first of each pair...itertools.groupby does just this.
from itertools import groupby, permutations

x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]]
list(list(g) for k, g in groupby(permutations(x, r=2), key=lambda p: p[0]))
Which produces a list of lists grouped by the first:
[[([1, 2, 3], [2, 4, 6]), ([1, 2, 3], [3, 5, 7])],
[([2, 4, 6], [1, 2, 3]), ([2, 4, 6], [3, 5, 7])],
[([3, 5, 7], [1, 2, 3]), ([3, 5, 7], [2, 4, 6])]]
Putting it all together, you can write a small function that subtracts the lists the way you want and pass each group of pairs to it:
from itertools import permutations, groupby
def sum_diff(pairs):
    return [sum(p - q for p, q in zip(*pair)) for pair in pairs]
x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]]
# call sum_diff for each group of pairs
result = [sum_diff(g) for k, g in groupby(permutations(x, r=2), key=lambda p: p[0])]
# [[-6, -9], [6, -3], [9, 3]]
This reduces the problem to just a couple lines of code and will be performant on large lists. And, since you mentioned the difficulty in keeping indices straight, notice that this uses no indices in the code other than selecting the first element for grouping.
Here is the code I believe you're looking for. I will explain it below:
def diff(a, b):
    total = 0
    for i in range(len(a)):
        total += a[i] - b[i]
    return total

x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]]

differences = []
for i in range(len(x)):
    soloDiff = []
    for j in range(len(x)):
        if i != j:
            soloDiff.append(diff(x[i], x[j]))
    differences.append(soloDiff)

print(differences)
Output:
[[-6, -9], [6, -3], [9, 3]]
First off, your description of the algorithm makes it clear that you should put the calculation of the difference between two lists into a function, since you will be using it repeatedly.
Your for loops start off fine, but you need a second, inner list so that each row of differences gets appended to the outer list separately. Also, when you are skipping the comparison of a list with itself, check i != j, not x[i] != x[j].
Let me know if you have any other questions!!
This is the simplest solution I can think of:
import numpy as np

x = [[1, 2, 3], [2, 4, 6], [3, 5, 7]]
x = np.array(x)
vectors = ['A', 'B', 'C']

for j in range(3):
    for k in range(3):
        if j != k:
            print(vectors[j], '-', vectors[k], '=', x[j] - x[k])
which will return
A - B = [-1 -2 -3]
A - C = [-2 -3 -4]
B - A = [1 2 3]
B - C = [-1 -1 -1]
C - A = [2 3 4]
C - B = [1 1 1]
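Note that this prints the element-wise differences; if you want the single summed difference per pair that the question asks for, you could sum each result. A minimal sketch of that variant:

import numpy as np

x = np.array([[1, 2, 3], [2, 4, 6], [3, 5, 7]])
vectors = ['A', 'B', 'C']

for j in range(3):
    for k in range(3):
        if j != k:
            # .sum() collapses the element-wise difference into a single number
            print(vectors[j], '-', vectors[k], '=', (x[j] - x[k]).sum())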
I have a 2D numpy array that I want to extract a submatrix from.
I get the submatrix by slicing the array as below.
Here I want a 3x3 submatrix around the item at index (2, 3).
>>> import numpy as np
>>> a = np.array([[0, 1, 2, 3],
... [4, 5, 6, 7],
... [8, 9, 0, 1],
... [2, 3, 4, 5]])
>>> a[1:4, 2:5]
array([[6, 7],
       [0, 1],
       [4, 5]])
But what I want is that for indexes that are out of range, it goes back to the beginning of array and continues from there. This is the result I want:
array([[6, 7, 4],
[0, 1, 8],
[4, 5, 2]])
I know that I can do things like taking the index modulo the width of the array, but I'm looking for a NumPy function that does that.
Also, for a one-dimensional array this would cause an index out of range error, which is not really useful...
This is one way, using np.pad with its 'wrap' mode.
>>> a = np.array([[0, 1, 2, 3],
...               [4, 5, 6, 7],
...               [8, 9, 0, 1],
...               [2, 3, 4, 5]])
>>> pad_width = 1
>>> i, j = 2, 3
>>> startrow, endrow = i-1+pad_width, i+2+pad_width # for 3 x 3 submatrix
>>> startcol, endcol = j-1+pad_width, j+2+pad_width
>>> np.pad(a, (pad_width, pad_width), 'wrap')[startrow:endrow, startcol:endcol]
array([[6, 7, 4],
       [0, 1, 8],
       [4, 5, 2]])
Depending on the shape of your patch (e.g. 5 x 5 instead of 3 x 3), you can increase pad_width and adjust the start and end row and column indices accordingly.
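If you need this for several centers or patch sizes, the same idea can be wrapped in a small helper. A minimal sketch (the function name and signature are my own, assuming an odd patch size):

import numpy as np

def wrapped_patch(a, center, size=3):
    # pad by half the patch size on every side, wrapping around the edges
    pad = size // 2
    padded = np.pad(a, pad, 'wrap')
    i, j = center
    # the original (i, j) moves to (i + pad, j + pad) in the padded array,
    # so the patch starts at (i, j) in padded coordinates
    return padded[i:i + size, j:j + size]

a = np.array([[0, 1, 2, 3],
              [4, 5, 6, 7],
              [8, 9, 0, 1],
              [2, 3, 4, 5]])
print(wrapped_patch(a, (2, 3)))
# [[6 7 4]
#  [0 1 8]
#  [4 5 2]]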
np.take does have a mode parameter which can wrap around out-of-bounds indices. But it's a bit hacky to use np.take for multidimensional arrays, since the axis argument must be a scalar.
However, in your particular case you could do this:
a = np.array([[0, 1, 2, 3],
              [4, 5, 6, 7],
              [8, 9, 0, 1],
              [2, 3, 4, 5]])
np.take(a, np.r_[2:5], axis=1, mode='wrap')[1:4]
Output:
array([[6, 7, 4],
       [0, 1, 8],
       [4, 5, 2]])
EDIT
This function might be what you are looking for (?)
def select3x3(a, idx):
    x, y = idx
    return np.take(np.take(a, np.r_[x-1:x+2], axis=0, mode='wrap'), np.r_[y-1:y+2], axis=1, mode='wrap')
But in retrospect, I recommend using modulo and fancy indexing for this kind of operation (it's basically what mode='wrap' is doing internally anyway):
def select3x3(a, idx):
    x, y = idx
    return a[np.r_[x-1:x+2][:, None] % a.shape[0], np.r_[y-1:y+2][None, :] % a.shape[1]]
The above solution also generalizes to any 2D shape of a.
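For reference, a quick usage check on the example array (assuming the same a as above) should reproduce the wrapped submatrix from the question:

print(select3x3(a, (2, 3)))
# [[6 7 4]
#  [0 1 8]
#  [4 5 2]]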
I have two ndarrays, where the length of the first dimension of X is the same as the size of y:
X = np.asarray([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9],
                [3, 6, 1]])
y = np.asarray([1, 0, 2, 3])
and I have a list:
l = [0, 2, 7]
I want to delete every row from X if the value at the same index in y is in l.
So in that case, I will have:
X = np.asarray([[1, 2, 3],
                [3, 6, 1]])
That is because the 2nd and 3rd elements of y are in l, so the 2nd and 3rd rows should be deleted from X.
How can it be done?
A simple one-liner solution uses np.delete and np.argwhere:
X = np.delete(X, np.argwhere(np.isin(y, l)).flatten(), axis=0)
Output
array([[1, 2, 3],
       [3, 6, 1]])
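Alternatively, a boolean mask avoids computing index positions at all. A minimal sketch with the original X, y, and l:

import numpy as np

# keep the rows whose corresponding y value is NOT in l
mask = ~np.isin(y, l)
X_kept = X[mask]
print(X_kept)
# [[1 2 3]
#  [3 6 1]]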
Consider the array a
np.random.seed([3,1415])
a = np.random.randint(0, 10, (10, 2))
a
array([[0, 2],
       [7, 3],
       [8, 7],
       [0, 6],
       [8, 6],
       [0, 2],
       [0, 4],
       [9, 7],
       [3, 2],
       [4, 3]])
What is a vectorized way to get the cumulative argmax?
array([[0, 0],  <-- both columns start off with argmax 0
       [1, 1],  <-- 7 > 0 so 1st col = 1, 3 > 2 so 2nd col = 1
       [2, 2],  <-- 8 > 7 so 1st col = 2, 7 > 3 so 2nd col = 2
       [2, 2],  <-- 0 < 8 so 1st col stays the same, 6 < 7 so 2nd col stays the same
       [2, 2],
       [2, 2],
       [2, 2],
       [7, 2],  <-- 9 > 8 is a new max in the 1st col, so its argmax is now 7
       [7, 2],
       [7, 2]])
Here is a non-vectorized way to do it.
Notice that as the window expands, argmax applies to the growing window.
import pandas as pd

pd.DataFrame(a).expanding().apply(np.argmax).astype(int).values
array([[0, 0],
       [1, 1],
       [2, 2],
       [2, 2],
       [2, 2],
       [2, 2],
       [2, 2],
       [7, 2],
       [7, 2],
       [7, 2]])
Here's a vectorized pure NumPy solution that performs pretty snappily:
def cumargmax(a):
    # running maximum of each column
    m = np.maximum.accumulate(a)
    # row indices broadcast across every column
    x = np.repeat(np.arange(a.shape[0])[:, None], a.shape[1], axis=1)
    # zero out the index wherever the running maximum did not increase
    x[1:] *= m[:-1] < m[1:]
    # carry forward the last index at which a new maximum appeared
    np.maximum.accumulate(x, axis=0, out=x)
    return x
Then we have:
>>> cumargmax(a)
array([[0, 0],
       [1, 1],
       [2, 2],
       [2, 2],
       [2, 2],
       [2, 2],
       [2, 2],
       [7, 2],
       [7, 2],
       [7, 2]])
Some quick testing on arrays with thousands to millions of values suggests that this is anywhere between 10-50 times faster than looping at the Python level (either implicitly or explicitly).
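If you want to check the speedup on your own data, a rough timing sketch could look like the following (the explicit loop baseline, array size, and repeat count are my own arbitrary choices; actual numbers will depend on shape and machine):

import timeit

import numpy as np

a_big = np.random.randint(0, 1000, (200_000, 2))

def cumargmax_loop(a):
    # plain Python loop over rows, kept only as a baseline for comparison
    out = np.zeros(a.shape, dtype=int)
    for col in range(a.shape[1]):
        best, best_i = a[0, col], 0
        for i in range(1, a.shape[0]):
            if a[i, col] > best:
                best, best_i = a[i, col], i
            out[i, col] = best_i
    return out

print("vectorized:", timeit.timeit(lambda: cumargmax(a_big), number=10))
print("loop:      ", timeit.timeit(lambda: cumargmax_loop(a_big), number=10))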
I can't think of a way to vectorize this over both columns easily; but if the number of columns is small relative to the number of rows, that shouldn't be an issue, and a for loop over that axis should suffice:
import numpy as np
import numpy_indexed as npi
a = np.random.randint(0, 10, (10))
max = np.maximum.accumulate(a)
idx = npi.indices(a, max)
print(idx)
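A minimal sketch of the per-column loop the answer describes, applied to the 2D array from the question (this assumes numpy_indexed is installed and that npi.indices maps each running maximum back to an index where that value occurs, as the 1D example above relies on):

import numpy as np
import numpy_indexed as npi

np.random.seed([3,1415])
a = np.random.randint(0, 10, (10, 2))

# apply the 1D recipe column by column and stack the results
out = np.column_stack([
    npi.indices(a[:, col], np.maximum.accumulate(a[:, col]))
    for col in range(a.shape[1])
])
print(out)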
I would like to make a function that computes the cumulative argmax for a 1D array and then apply it to all columns. This is the code:
import numpy as np

np.random.seed([3,1415])
a = np.random.randint(0, 10, (10, 2))

def cumargmax(v):
    uargmax = np.frompyfunc(lambda i, j: j if v[j] > v[i] else i, 2, 1)
    # np.object was removed in recent NumPy releases; the plain object dtype is equivalent
    return uargmax.accumulate(np.arange(0, len(v)), 0, dtype=object).astype(v.dtype)

np.apply_along_axis(cumargmax, 0, a)
The reason for accumulating with an object dtype and then converting back is a workaround for NumPy 1.9, as mentioned in "generalized cumulative functions in NumPy/SciPy?".
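For the seeded example array, this should reproduce the cumulative argmax shown in the expected output above:

print(np.apply_along_axis(cumargmax, 0, a))
# [[0 0]
#  [1 1]
#  [2 2]
#  [2 2]
#  [2 2]
#  [2 2]
#  [2 2]
#  [7 2]
#  [7 2]
#  [7 2]]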