Comparing value with neighbor elements in numpy - python

Let's say I have a numpy array
a b c
A = i j k
u v w
I want to compare the value of the central element with some of its eight neighbors (along the axes or along the diagonals). Is there any faster way than a nested for loop (it's too slow for a big matrix)?
To be more specific, what I want to do is compare each element's value with its neighbors' and assign new values.
For example:
if (j == 1):
    if (j > i) & (j > k):
        j = 999
    else:
        j = 0
if (j == 2):
    if (j > c) & (j > u):
        j = 999
    else:
        j = 0
...
something like this.

Your operation contains lots of conditionals, so the most efficient way to do it in the general case (any kind of conditionals, any kind of operations) is using loops. This could be done efficiently using numba or cython. In special cases, you can implement it using higher level functions in numpy/scipy. I'll show a solution for the specific example you gave, and hopefully you can generalize from there.
Start with some fake data:
import numpy as np
import scipy.ndimage

A = np.asarray([
    [1, 1, 1, 2, 0],
    [1, 0, 2, 2, 2],
    [0, 2, 0, 1, 0],
    [1, 2, 2, 1, 0],
    [2, 1, 1, 1, 2]
])
We'll find locations in A where various conditions apply.
1a) The value is 1
1b) The value is greater than its horizontal neighbors
2a) The value is 2
2b) The value is greater than its diagonal neighbors
Find locations in A where the specified values occur:
cond1a = A == 1
cond2a = A == 2
This gives matrices of boolean values, of the same size as A. The value is true where the condition holds, otherwise false.
Find locations in A where each element has the specified relationships to its neighbors:
# condition 1b: value greater than horizontal neighbors
f1 = np.asarray([[1, 0, 1]])
cond1b = A > scipy.ndimage.maximum_filter(
    A, footprint=f1, mode='constant', cval=-np.inf)

# condition 2b: value greater than diagonal neighbors
f2 = np.asarray([
    [0, 0, 1],
    [0, 0, 0],
    [1, 0, 0]
])
cond2b = A > scipy.ndimage.maximum_filter(
    A, footprint=f2, mode='constant', cval=-np.inf)
As before, this gives matrices of boolean values indicating where the conditions are true. This code uses scipy.ndimage.maximum_filter(). This function slides a 'footprint' so that it is centered over each element of A in turn. The returned value for that position is the maximum of all elements for which the footprint is 1. The mode argument specifies how to treat implicit values outside the boundaries of the matrix, where the footprint falls off the edge. Here, we treat them as negative infinity, which is the same as ignoring them (since we're using the max operation).
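As a small illustration of the footprint mechanics (my own example, not from the question's data), the call below returns, at each position, the maximum of only the left and right neighbors:
from scipy.ndimage import maximum_filter
x = np.array([[3., 1., 4., 1., 5.]])
maximum_filter(x, footprint=np.array([[1, 0, 1]]), mode='constant', cval=-np.inf)
# array([[1., 4., 1., 5., 1.]])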
Set values of the result according to the conditions. The value is 999 if conditions 1a and 1b are both true, or if conditions 2a and 2b are both true. Else, the value is 0.
result = np.zeros(A.shape)
result[(cond1a & cond1b) | (cond2a & cond2b)] = 999
The result is:
[
[ 0, 0, 0, 0, 0],
[999, 0, 0, 999, 999],
[ 0, 0, 0, 999, 0],
[ 0, 0, 999, 0, 0],
[ 0, 0, 0, 0, 999]
]
You can generalize this approach to other patterns of neighbors by changing the filter footprint. You can generalize to other operations (minimum, median, percentiles, etc.) using other kinds of filters (see scipy.ndimage). For operations that can be expressed as weighted sums, use 2d cross correlation.
This approach should be much faster than looping in python. But, it does perform unnecessary computations (for example, it's only necessary to compute the max when the value is 1 or 2, but we're doing it for all elements). Looping manually would let you avoid these computations. Looping in python would probably be much slower than the code here. But, implementing it in numba or cython would probably be faster because these tools generate compiled code.
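If you do need fully general conditional logic, a numba version of the naive double loop is a reasonable fallback. The sketch below implements the specific rule used above; it assumes numba is installed, and the function name and explicit boundary checks are my own choices rather than part of the original answer:
import numpy as np
from numba import njit

@njit
def neighbor_rule(A):
    # returns 999/0 according to conditions 1a/1b and 2a/2b above;
    # missing neighbors outside the array are treated as -inf, as with the filters
    n_rows, n_cols = A.shape
    out = np.zeros(A.shape)
    for r in range(n_rows):
        for c in range(n_cols):
            v = A[r, c]
            if v == 1:
                # condition 1b: greater than horizontal neighbors
                left = A[r, c - 1] if c > 0 else -np.inf
                right = A[r, c + 1] if c < n_cols - 1 else -np.inf
                out[r, c] = 999 if (v > left and v > right) else 0
            elif v == 2:
                # condition 2b: greater than the two diagonal neighbors
                up_right = A[r - 1, c + 1] if (r > 0 and c < n_cols - 1) else -np.inf
                down_left = A[r + 1, c - 1] if (r < n_rows - 1 and c > 0) else -np.inf
                out[r, c] = 999 if (v > up_right and v > down_left) else 0
    return out
This should reproduce the result array above, and it only inspects neighbors for elements whose value is 1 or 2.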

I used numpy's:
concatenate to pad with zeroes
dstack and roll to align correctly
Then apply custom_roll twice along different dimensions and subtract the original array.
import numpy as np

def custom_roll(a, axis=0):
    n = 3
    a = a.T if axis == 1 else a
    pad = np.zeros((n - 1, a.shape[1]))
    a = np.concatenate([a, pad], axis=0)
    ad = np.dstack([np.roll(a, i, axis=0) for i in range(n)])
    a = ad.sum(2)[1:-1, :]
    a = a.T if axis == 1 else a
    return a
Consider the following ndarray:
A = np.arange(25).reshape(5, 5)
A
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
sum_of_eight_around_me = custom_roll(custom_roll(A), axis=1) - A
sum_of_eight_around_me
array([[ 12., 20., 25., 30., 20.],
[ 28., 48., 56., 64., 42.],
[ 53., 88., 96., 104., 67.],
[ 78., 128., 136., 144., 92.],
[ 52., 90., 95., 100., 60.]])
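For what it's worth, the same "sum of the eight neighbors" can also be computed with a single ndimage convolution. This is my own alternative sketch (assuming scipy is available); mode='constant', cval=0 matches the zero padding used by custom_roll:
import numpy as np
from scipy.ndimage import convolve

A = np.arange(25).reshape(5, 5)
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],   # zero at the center excludes the element itself
                   [1, 1, 1]])
sum_of_eight_around_me = convolve(A, kernel, mode='constant', cval=0)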

Related

Element-wise Cross Product of 2D arrays of Coordinates

I'm working with a dataset that stores an array of unit-vectors as arrays of the vectors' components.
How would I use vectorised code / broadcasting to write clean and compact code to give the cross product of the vectors element-wise?
For example, here's a brute force method for looping through the length of the arrays, picking out the coordinates, re-composing the two vectors, then calculating the cross product.
import numpy as np

x = [0, 0, 1, 1]
y = [0, 1, 0, 1]
z = [1, 0, 0, 1]
v1 = np.array([x, y, z])

x = [1, 1, 0, 1]
y = [1, 0, 1, 1]
z = [0, 1, 1, 1]
v2 = np.array([x, y, z])

result = []
for i in range(0, len(x)):
    a = [v1[0][i], v1[1][i], v1[2][i]]
    b = [v2[0][i], v2[1][i], v2[2][i]]
    result.append(np.cross(a, b))
result
>>>
[
array([-1, 1, 0]),
array([ 1, 0, -1]),
array([ 0, -1, 1]),
array([ 0, 0, 0])
]
I've tried to understand this question and answer to generalise it, but failed:
- Element wise cross product of vectors contained in 2 arrays with Python
np.cross can work with 2D arrays too, you just need to specify the right axes:
np.cross(v1,v2, axisa=0, axisb=0)
array([[-1, 1, 0],
[ 1, 0, -1],
[ 0, -1, 1],
[ 0, 0, 0]])
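If you prefer not to remember the axis keywords, an equivalent way to get the same (4, 3) result (my variation) is to transpose first so that each vector lies along the last axis:
np.cross(v1.T, v2.T)
# array([[-1,  1,  0],
#        [ 1,  0, -1],
#        [ 0, -1,  1],
#        [ 0,  0,  0]])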

Create stack of arrays from diagonal values using numpy

I'm trying to do some matrix calculations in python and came across a problem when I tried to speed up my code using stacked arrays instead of simple for loops. I need to create a 2D-array with values (given as a 1D-array) on the diagonal, but couldn't figure out a smart way to do it with stacked arrays.
In the old (loop) version, I used the np.diag() method, which returns exactly what I need (a 2D-array in that case) if I give the values as 1D-array as input. However, when I switched to stacked arrays my input is not a 1D-array anymore, so that the np.diag() method returns a copy of the diagonal of my 2D-input instead.
Old version with 1D input:
import numpy as np
vals = np.array([1,2,3])
mat = np.diag(vals)
print(mat.shape)
Out: (3, 3)
New version with 2D input:
vals_stack = np.repeat(np.expand_dims(vals, axis=0), 5, axis=0)
# btw: is there a better way to repeat/stack my array?
mat_stack = np.diag(vals_stack)
print(mat_stack.shape)
Out: (3,)
So you can see that np.diag() returns a 1D-array (as expected from the documentation), but I actually need a stack of 2D-arrays. So the shape of mat_stack must be (5, 3, 3) and not (3,). Is there any function for that in numpy? Or do I have to loop over the additional dimension like this:
def mydiag(stack):
    diag = np.zeros([stack.shape[0], stack.shape[1], stack.shape[1]])
    for i in np.arange(stack.shape[0]):
        diag[i, :, :] = np.diag(stack[i, :])
    return diag
In numpy you should use apply_along_axis. There is even an example at the end of the doc for your specific case (here). So the answer is:
np.apply_along_axis(np.diag, -1, vals_stack)
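A quick sanity check of the resulting shape (my example, repeating the 1D array 5 times as in the question):
vals = np.array([1, 2, 3])
vals_stack = np.repeat(np.expand_dims(vals, axis=0), 5, axis=0)   # shape (5, 3)
mat_stack = np.apply_along_axis(np.diag, -1, vals_stack)
print(mat_stack.shape)
# (5, 3, 3)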
A more pythonic way would be something like this:
[np.diag(row) for row in vals_stack]
Is this what you had in mind:
In [499]: x = np.arange(12).reshape(4,3)
In [500]: X = np.zeros((4,3,3),int)
In [501]: X[np.arange(4)[:,None],np.arange(3), np.arange(3)] = x
In [502]: X
Out[502]:
array([[[ 0, 0, 0],
[ 0, 1, 0],
[ 0, 0, 2]],
[[ 3, 0, 0],
[ 0, 4, 0],
[ 0, 0, 5]],
[[ 6, 0, 0],
[ 0, 7, 0],
[ 0, 0, 8]],
[[ 9, 0, 0],
[ 0, 10, 0],
[ 0, 0, 11]]])
X[0,np.arange(3), np.arange(3)] indexes the diagonal on the first plane. np.arange(4)[:,None] is a (4,1) array, which broadcasts with a (3,) to index a (4,3) block, matching the size of x.
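The indexing trick generalizes directly to any (n, k) stack of diagonal values; here is a small helper built on the same idea (my own sketch, not part of the answer above):
import numpy as np

def diag_stack(vals):
    vals = np.asarray(vals)
    n, k = vals.shape
    out = np.zeros((n, k, k), dtype=vals.dtype)
    out[np.arange(n)[:, None], np.arange(k), np.arange(k)] = vals
    return out

# diag_stack(np.arange(12).reshape(4, 3)) reproduces X above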

Numpy Dot product with nested array

I'm trying to come up with a method to perform load combinations and transient load patterning for structural/civil engineering applications.
Without patterning it's fairly simple:
list of load results = [[d],[t1],...,[ti]], where [ti] = transient load result as a numpy array = A
list of combos = [[1,0,....,0],[0,1,....,1], [dfi, tf1,.....,tfi]] , where tfi = code load factor for transient load = B
in python this works as numpy.dot(A,B)
so my issue arises where:
`list of load results = [[d],[t1],.....[ti]]`, where [t1] = [[t11]......[t1i]] for i pattern possibilities and [t1i] = numpy array
So I have a nested array within another array and want to multiply by a matrix of load combinations. Is there a way to implement this in one matrix operation? I can come up with a method by looping over the pattern possibilities and then taking a dot product with the load combos, but this is computationally expensive. Any thoughts?
Thanks
for an example not considering patterning see: https://github.com/buddyd16/Structural-Engineering/blob/master/Analysis/load_combo_test.py
Essentially I need a method that gives similar results, assuming that for loads = np.array([[D],[Ex],[Ey],[F],[H],[L],[Lr],[R],[S],[Wx],[Wy]]) the entries [L], [Lr], [R], [S] are actually nested arrays, i.e. if D is a 1x500 array/vector, then L, Lr, R, or S could be a 100x500 array.
My simple solution is:
combined_pattern = []
for pattern in load_patterns:
    loads = np.array([[D],[Ex],[Ey],[F],[H],[L[pattern]],[Lr[pattern]],[R[pattern]],[S[pattern]],[Wx],[Wy]])
    combined_pattern.append(np.dot(basic_factors, loads))
Simpler Example:
import numpy as np
#Simple
A = np.array([1,0,0])
B = np.array([0,1,0])
C = np.array([0,0,1])
Loads = np.array([A,B,C])
Factors = np.array([[1,1,1],[0.5,0.5,0.5],[0.25,0.25,0.25]])
result = np.dot(Factors, Loads)
# Looking for a faster way to accomplish the below operation
# this works but will be slow for large data sets
# bi can be up to 1x5000 in size and i can be up to 500
A = np.array([1,0,0])
b1 = np.array([1,0,0])
b2 = np.array([0,1,0])
b3 = np.array([0,0,1])
B = np.array([b1,b2,b3])
C = np.array([0,0,1])
result_list = []
for load in B:
    Loads = np.array([A, load, C])
    Factors = np.array([[1, 1, 1], [0.5, 0.5, 0.5], [0.25, 0.25, 0.25]])
    result = np.dot(Factors, Loads)
    result_list.append(result)
edit: Had Factors and Loads reversed in the np.dot().
In your simple example, the array shapes are:
In [2]: A.shape
Out[2]: (3,)
In [3]: Loads.shape
Out[3]: (3, 3)
In [4]: Factors.shape
Out[4]: (3, 3)
In [5]: result.shape
Out[5]: (3, 3)
The rule in dot is that the last dimension of Loads pairs with the 2nd to the last of Factors
result = np.dot(Loads,Factors)
(3,3) dot (3,3) => (3,3) # 3's in common
(m,n) dot (n,l) => (m,l) # n's in common
In the iteration, A,load and C are all (3,) and Loads is (3,3).
result_list is a list of 3 (3,3) arrays, and np.array(result_list) would be (3,3,3).
Let's make a 3d array of all the Loads:
In [16]: Bloads = np.array([np.array([A,load,C]) for load in B])
In [17]: Bloads.shape
Out[17]: (3, 3, 3)
In [18]: Bloads
Out[18]:
array([[[1, 0, 0],
[1, 0, 0],
[0, 0, 1]],
[[1, 0, 0],
[0, 1, 0],
[0, 0, 1]],
[[1, 0, 0],
[0, 0, 1],
[0, 0, 1]]])
I can easily do a dot of this Bloads and Factors with einsum:
In [19]: np.einsum('lkm,mn->lkn', Bloads, Factors)
Out[19]:
array([[[1. , 1. , 1. ],
[1. , 1. , 1. ],
[0.25, 0.25, 0.25]],
[[1. , 1. , 1. ],
[0.5 , 0.5 , 0.5 ],
[0.25, 0.25, 0.25]],
[[1. , 1. , 1. ],
[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25]]])
einsum isn't the only way, but it's the easiest way (for me) to keep track of dimensions.
It's even easier to keep dimensions straight if they differ. Here they are all 3, so it's hard to keep them separate. But if B was (5,4) and Factors (4,2), then Bloads would be (5,3,4), and the einsum result (5,3,2) (the size 4 dropping out in the dot).
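As a side note of mine (not part of the original answer), the same batched product can be written with the @ (matmul) operator, which broadcasts the (3, 3) Factors across the leading axis of Bloads:
result3d = Bloads @ Factors        # shape (3, 3, 3)
# equivalent to np.einsum('lkm,mn->lkn', Bloads, Factors)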
Constructing Bloads without a loop is a bit trickier, since the rows of B are interleaved with A and C.
In [38]: np.stack((A[None,:].repeat(3,0),B,C[None,:].repeat(3,0)),1)
Out[38]:
array([[[1, 0, 0],
[1, 0, 0],
[0, 0, 1]],
[[1, 0, 0],
[0, 1, 0],
[0, 0, 1]],
[[1, 0, 0],
[0, 0, 1],
[0, 0, 1]]])
To understand this, test the subexpressions, e.g. A[None,:], the repeat, etc.
Equivalently:
np.array((A[None,:].repeat(3,0),B,C[None,:].repeat(3,0))).transpose(1,0,2)

Python how to find unique entries and get the minimum values from a matching array

I have a numpy array, indices:
array([[ 0, 0, 0],
[ 0, 0, 0],
[ 2, 0, 2],
[ 0, 0, 0],
[ 2, 0, 2],
[95, 71, 95]])
I have another array of the same length called distances:
array([ 0.98713981, 1.04705992, 1.42340327, 74.0139111 ,
74.4285216 , 74.84623217])
All of the rows in indices have a match in the distances array. The problem is, there are duplicates in the indices array, and they have different values in the corresponding distances array. I would like to get the minimum distance for all triplets of indices, and discard the others. Therefore, with the inputs above, I want the output:
indicesOUT =
array([[ 0, 0, 0],
[ 2, 0, 2],
[95, 71, 95]])
distancesOUT=
array([ 0.98713981, 1.42340327, 74.84623217])
My current strategy is as follows:
import numpy as np
indicesOUT = []
distancesOUT = []
for i in range(6):
    for j in range(6):
        for k in range(6):
            if len([s for s in indicesOUT if [i, j, k] == s]) == 0:
                current = np.array([i, j, k])
                ind = np.where((indices == current).all(-1))[0]
                if len(ind) > 0:
                    currentDistances = distances[ind]
                    dist = np.amin(currentDistances)
                    indicesOUT.append([i, j, k])
                    distancesOUT.append(dist)
The problem is, the actual arrays have about 4 million elements each, so this approach is way too slow. What is the most efficient way of doing this?
This is essentially a grouping operation, and NumPy is not well-optimized for it. Fortunately, the Pandas package has some very fast tools that can be adapted to this exact problem.
With your data above, we can do this:
import pandas as pd
def drop_duplicates(indices, distances):
    data = pd.Series(distances)
    grouped = data.groupby(list(indices.T)).min().reset_index()
    return grouped.values[:, :3], grouped.values[:, 3]
And the output for your data is
array([[ 0., 0., 0.],
[ 2., 0., 2.],
[ 95., 71., 95.]]),
array([ 0.98713981, 1.42340327, 74.84623217])
My benchmark shows that for 4,000,000 elements, this should run in about a second:
indices = np.random.randint(0, 100, size=(4000000, 3))
distances = np.random.random(4000000)
%timeit drop_duplicates(indices, distances)
# 1 loops, best of 3: 1.15 s per loop
As written above, the input order of the indices will not necessarily be preserved; keeping the original order would require a bit more thought.
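If you want to stay in pure numpy, grouping with np.unique plus an unbuffered reduction is another option. This is my own sketch (it needs numpy >= 1.13 for unique(..., axis=0)) and, like the Pandas version, it returns the groups in sorted rather than original order:
import numpy as np

def drop_duplicates_np(indices, distances):
    uniq, inverse = np.unique(indices, axis=0, return_inverse=True)
    best = np.full(len(uniq), np.inf)
    # unbuffered per-group minimum; ravel guards against inverse not being 1-D
    np.minimum.at(best, inverse.ravel(), distances)
    return uniq, best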

How to randomly change positions of non-zero entries of an array where certain rows are excluded

I have a numpy array consisting of a lot of 0s and a few non-zero entries e.g. like this (just a toy example):
myArray = np.array([[ 0. , 0. , 0.79],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ 0. , 0.435 , 0. ]])
Now I would like to move each of the non-zero entries with a given probability which means that some of the entries are moved, some might remain at the current position. Some of the rows are not allowed to contain a non-zero entry which means that values are not allowed to be moved there. I implemented that as follows:
import numpy as np
# for reproducibility
np.random.seed(2)
myArray = np.array([[ 0. , 0. , 0.79],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ 0. , 0.435 , 0. ]])
# list of rows where numbers are not allowed to be moved to
ignoreRows = [2]
# moving probability
probMove = 0.3
# get non-zero entries
nzEntries = np.nonzero(myArray)
# indices of the non-zero entries as tuples
indNZ = zip(nzEntries[0], nzEntries[1])
# store values
valNZ = [myArray[i] for i in indNZ]
# generating probabilities for moving for each non-zero entry
lProb = np.random.rand(len(valNZ))  # one probability per non-zero entry
allowedRows = [ind for ind in xrange(myArray.shape[0]) if ind not in ignoreRows] # replace by "range" in python 3.x
allowedCols = [ind for ind in xrange(myArray.shape[1])] # replace by "range" in python 3.x
for indProb, prob in enumerate(lProb):
    # only move with a certain probability
    if prob <= probMove:
        # randomly change position
        myArray[np.random.choice(allowedRows), np.random.choice(allowedCols)] = valNZ[indProb]
        # set old position to zero
        myArray[indNZ[indProb]] = 0.
print myArray
First, I determine all the indices and values of the non-zero entries. Then I assign a certain probability to each of these entries which determines whether the entry will be moved. Then I get the allowed target rows.
In the second step, I loop through the list of indices and move them according to their moving probability which is done by choosing from the allowed rows and columns, assigning the respective value to these new indices and set the "old" value to 0.
It works fine with the code above, however, speed really matters in this case and I wonder whether there is a more efficient way of doing this.
EDIT:
Hpaulj's answer helped me to get rid of the for-loop which is nice and the reason why I accepted his answer. I incorporated his comments and posted an answer below as well, just in case someone else stumbles over this example and wonders how I used his answer in the end.
You can index elements with arrays, so:
valNZ=myArray[nzEntries]
can replace the zip and comprehension.
Simplify these 2 assignments:
allowedCols=np.arange(myArray.shape[1]);
allowedRows=np.delete(np.arange(myArray.shape[0]), ignoreRows)
With:
I = lProb < probMove; valNZ = valNZ[I]; indNZ = indNZ[I]
you don't need to perform the prob <= probMove test each time in the loop; just iterate over valNZ and indNZ.
I think your random.choice can be generated for all of these valNZ at once:
np.random.choice(np.arange(10), 10, True)
# 10 choices from the range with replacement
With that it should be possible to move all of the points without a loop.
I haven't worked out the details yet.
There is one way in which your iterative move will be different from any parallel one. If a destination choice is another value, the iterative approach can overwrite, and possibly move a given value a couple of times. Parallel code will not perform the sequential moves. You have to decide whether one is correct or not.
There is a ufunc method, .at, that performs unbuffered operations. It works for operations like add, but I don't know if it would apply to an indexing move like this.
simplified version of the iterative moving:
In [106]: arr=np.arange(20).reshape(4,5)
In [107]: I=np.nonzero(arr>10)
In [108]: v=arr[I]
In [109]: rows,cols=np.arange(4),np.arange(5)
In [110]: for i in range(len(v)):
     .....:     dest = (np.random.choice(rows), np.random.choice(cols))
     .....:     arr[dest] = v[i]
     .....:     arr[I[0][i], I[1][i]] = 0
In [111]: arr
Out[111]:
array([[ 0, 18, 2, 14, 11],
[ 5, 16, 7, 13, 19],
[10, 0, 0, 0, 0],
[ 0, 17, 0, 0, 0]])
possible vectorized version:
In [117]: dest=(np.random.choice(rows,len(v),True),np.random.choice(cols,len(v),True))
In [118]: dest
Out[118]: (array([1, 1, 3, 1, 3, 2, 3, 0, 0]), array([3, 0, 0, 1, 2, 3, 4, 0, 1]))
In [119]: arr[dest]
Out[119]: array([ 8, 5, 15, 6, 17, 13, 19, 0, 1])
In [120]: arr[I]=0
In [121]: arr[dest]=v
In [122]: arr
Out[122]:
array([[18, 19, 2, 3, 4],
[12, 14, 7, 11, 9],
[10, 0, 0, 16, 0],
[13, 0, 15, 0, 17]])
If the I positions are set to 0 after the dest assignment instead of before, there are more zeros.
In [124]: arr[dest]=v
In [125]: arr[I]=0
In [126]: arr
Out[126]:
array([[18, 19, 2, 3, 4],
[12, 14, 7, 11, 9],
[10, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0]])
same dest, but done iteratively:
In [129]: for i in range(len(v)):
     .....:     arr[dest[0][i], dest[1][i]] = v[i]
     .....:     arr[I[0][i], I[1][i]] = 0
In [130]: arr
Out[130]:
array([[18, 19, 2, 3, 4],
[12, 14, 7, 11, 9],
[10, 0, 0, 16, 0],
[ 0, 0, 0, 0, 0]])
With this small size and high moving density, the differences between the iterative and vectorized solutions are large. For a sparse array they would be smaller.
Below you can find the code I came up with after incorporating hpaulj's answer and the answer from this question. This way, I got rid of the for-loop which improved the code a lot. Therefore, I accepted hpaulj's answer. Maybe the code below helps someone else in a similar situation.
import numpy as np
from itertools import compress
# for reproducibility
np.random.seed(2)
myArray = np.array([[ 0. , 0.2 , 0.79],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ 0. , 0.435 , 0. ]])
# list of rows where numbers are not allowed to be moved to
ignoreRows= []
# moving probability
probMove = 0.5
# get non-zero entries
nzEntries = np.nonzero(myArray)
# indices of the non-zero entries as tuples
indNZ = zip(nzEntries[0],nzEntries[1])
# store values
valNZ = myArray[nzEntries]
# generating probabilities for moving for each non-zero entry
lProb = np.random.rand(len(valNZ))
# get the rows/columns where the entries are allowed to be moved
allowedCols = np.arange(myArray.shape[1]);
allowedRows = np.delete(np.arange(myArray.shape[0]), ignoreRows)
# get the entries that are actually moved
I = lProb < probMove
print I
# get the values of the entries that are moved
valNZ = valNZ[I]
# get the indices of the entries that are moved
indNZ = list(compress(indNZ, I))
# get the destination for the entries that are moved
dest = (np.random.choice(allowedRows, len(valNZ), True), np.random.choice(allowedCols, len(valNZ), True))
print myArray
print indNZ
print dest
# set the old indices to 0
myArray[zip(*indNZ)] = 0
# move the values to their respective destination
myArray[dest] = valNZ
print myArray
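A closing note of mine: the code above is written for Python 2 (print statements, a list-returning zip). In Python 3, zip() returns an iterator and myArray[zip(*indNZ)] no longer works; a minimal adaptation is to keep the indices as the index arrays returned by np.nonzero instead of a list of tuples:
rows, cols = np.nonzero(myArray)
valNZ = myArray[rows, cols]
I = np.random.rand(len(valNZ)) < probMove
dest = (np.random.choice(allowedRows, I.sum(), True),
        np.random.choice(allowedCols, I.sum(), True))
myArray[rows[I], cols[I]] = 0.
myArray[dest] = valNZ[I]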
