Switch the value of elements at some positions - Python

I have a numpy array X of size N, filled with 0s and 1s.
I generate a sample S of size M.
I want to flip the elements of X at each position in sample S.
I want to ask whether this is possible without using loops, using some atomic operation from the numpy mask module.
I want to avoid any type of loop like
for i in sample:
    X[i] = 1 - X[i]
and replace it with a single call in pylab.
Is this possible?

Use X[sample] = 1 - X[sample].
For example:
>>> import numpy as np
>>> X = np.array([1, 1, 0, 1, 1])
>>> sample = [1,2,3]
>>> X[sample]
array([1, 0, 1])
>>> X[sample] = 1 - X[sample]
>>> X
array([1, 0, 1, 0, 1])
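Since X holds only 0s and 1s, an in-place XOR is an equivalent one-liner (a small variation on the same fancy-indexing idea):
>>> X = np.array([1, 1, 0, 1, 1])
>>> X[sample] ^= 1
>>> X
array([1, 0, 1, 0, 1])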

Related

Replacing array at i-th dimension

Let's say I have a two-dimensional array
import numpy as np
a = np.array([[1, 1, 1], [2,2,2], [3,3,3]])
and I would like to replace the third vector (in the second dimension) with zeros. I would do
a[:, 2] = np.array([0, 0, 0])
But what if I would like to be able to do that programmatically? I mean, let's say that the variable x = 1 contained the dimension on which I wanted to do the replacing. How would the function replace(arr, dimension, value, arr_to_be_replaced) have to look if I wanted to call it as replace(a, x, 2, np.array([0, 0, 0]))?
numpy has a similar function, insert. However, it doesn't replace at dimension i, it returns a copy with an additional vector.
All solutions are welcome, but I would prefer a solution that doesn't recreate the array, so as to save memory.
arr[:, 1]
is basically shorthand for
arr[(slice(None), 1)]
that is, a tuple with slice elements and integers.
Knowing that, you can construct a tuple of slice objects manually, adjust the values depending on an axis parameter and use that as your index. So for
import numpy as np
arr = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
axis = 1
idx = 2
arr[:, idx] = np.array([0, 0, 0])
# ^- axis position
you can use
slices = [slice(None)] * arr.ndim
slices[axis] = idx
arr[tuple(slices)] = np.array([0, 0, 0])
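Wrapped up as the replace function from the question, matching the call replace(a, x, 2, np.array([0, 0, 0])) (a minimal sketch; the parameter names are my own):
import numpy as np

def replace(arr, axis, idx, value):
    # Build an index equivalent to arr[..., idx, ...] with idx at position `axis`
    slices = [slice(None)] * arr.ndim
    slices[axis] = idx
    arr[tuple(slices)] = value  # assigns in place, no copy of arr

a = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
replace(a, 1, 2, np.array([0, 0, 0]))
# a is now [[1, 1, 0], [2, 2, 0], [3, 3, 0]]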

Filtering of array elements by another array in numpy

Here is a simple example:
import numpy as np
x=np.random.rand(5,5)
k,p = np.where(x>0.5)
k and p are arrays of indices
Now I have a list of rows which should be considered m=[0,2,4], so I need to find all entries of k which are in the list m.
I came up with a very simple but horribly inefficient solution:
d = np.array([(a, b) for a, b in zip(k, p) if a in m])
The solution works, but it is very slow. I'm looking for a better and more efficient one. I need to do a few million such operations with a dynamically adjusted m, so the efficiency of the algorithm is really a critical question.
Maybe the below is faster:
d=np.dstack((k,p))[0]
print(d[np.isin(d[:,0],m)])
You could use isin() to get a boolean mask which you can use to index k.
>>> x=np.random.rand(3,3)
>>> x
array([[0.74043564, 0.48328081, 0.82396324],
       [0.40693944, 0.24951958, 0.18043229],
       [0.46623863, 0.53559775, 0.98956277]])
>>> k, p = np.where(x > 0.5)
>>> p
array([0, 2, 1, 2])
>>> k
array([0, 0, 2, 2])
>>> m
array([0, 1])
>>> np.isin(k, m)
array([ True, True, False, False])
>>> k[np.isin(k, m)]
array([0, 0])
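If you need the (row, column) pairs rather than just the filtered k, the same mask can index both arrays; a small sketch reusing the data above:
>>> mask = np.isin(k, m)
>>> np.column_stack((k[mask], p[mask]))
array([[0, 0],
       [0, 2]])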
How about:
import numpy as np
m = np.array([0, 2, 4])
k, p = np.where(x[m, :] > 0.5)
k = m[k]
print(list(zip(k, p)))
This only considers the interesting rows (and then zips them to 2d indices).

array/list/tuple of nparrays with variable length?

In Python, I have the following problem, made into a toy example:
import random
import numpy as np
x_arr = np.array([], dtype=object)
for x in range(5):
    y_arr = np.array([], dtype=object)
    for y in range(5):
        r = random.random()
        if r < 0.5:
            y_arr = np.append(y_arr, y)
    if random.random() < 0.9:
        x_arr = np.append(x_arr, y_arr)
#This results in
>>> x_arr
array([4, 0, 1, 2, 4, 0, 3, 4], dtype=object)
I would like to have
array([array([4]), array([0, 1, 2, 4]), array([0, 3, 4])], dtype=object)
So apparently, in this run y_arr was written into x_arr 3 out of 5 (variable) times, with lengths 1, 4, and 3 (variable).
append() puts the results in one long 1D structure, whereas I would like to keep it 2D. Also, considering the example, it might be that no numbers get written at all (if you are 'unlucky' with the random numbers). So I have an a priori unknown array of arrays, each with an a priori unknown number of elements. How would I approach this in Python, other than finding an upper bound on both and storing a lot of zeros?
You might do it in a two-step process: first add an element, then set the element. This circumvents the automatic flattening that happens in np.append() when axis=None (the default behavior), as documented here.
import random
import numpy as np
x_arr = np.array([], dtype=object).reshape((1, 0))
for x in range(5):
    y_arr = np.array([], dtype=np.int32)
    for y in range(5):
        r = random.random()
        if r < 0.5:
            y_arr = np.append(y_arr, y)
    if random.random() < 0.9:
        x_arr = np.append(x_arr, 0)
        x_arr[-1] = y_arr
print(type(x_arr))
print(x_arr)
This gives:
<class 'numpy.ndarray'>
[array([0, 1, 2]) array([0, 1, 2, 3]) array([0, 1, 4]) array([0, 1, 3, 4])
 array([2, 3])]
Also, why not use a Python list for x_arr (or y_arr)? Nested numpy arrays are not really useful when they are not proper ndarrays.
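For illustration, a minimal sketch of that list-based variant (my own rewrite of the loop above), converting to an object array only at the end, if it is needed at all:
import random
import numpy as np

x_list = []
for x in range(5):
    # Collect a variable-length row
    y_arr = np.array([y for y in range(5) if random.random() < 0.5])
    if random.random() < 0.9:
        x_list.append(y_arr)

# Optional: wrap in an object array. np.empty + assignment avoids the
# ambiguity of np.array() when all rows happen to have the same length.
x_arr = np.empty(len(x_list), dtype=object)
x_arr[:] = x_list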

Roll rows of a matrix independently

I have a matrix (2d numpy ndarray, to be precise):
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
And I want to roll each row of A independently, according to roll values in another array:
r = np.array([2, 0, -1])
That is, I want to do this:
print(np.array([np.roll(row, x) for row, x in zip(A, r)]))
[[0 0 4]
 [1 2 3]
 [0 5 0]]
Is there a way to do this efficiently? Perhaps using fancy indexing tricks?
Sure, you can do it using advanced indexing; whether it is the fastest way probably depends on your array size (if your rows are large it may not be):
rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
# Always use a negative shift, so that column_indices are valid.
# (could also use a modulo operation)
r[r < 0] += A.shape[1]
column_indices = column_indices - r[:, np.newaxis]
result = A[rows, column_indices]
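With the A and r from the question, this reproduces the loop-based output:
print(result)
# [[0 0 4]
#  [1 2 3]
#  [0 5 0]]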
numpy.lib.stride_tricks.as_strided stricks (abbrev pun intended) again!
Speaking of fancy indexing tricks, there's the infamous - np.lib.stride_tricks.as_strided. The idea/trick would be to get a sliced portion starting from the first column until the second last one and concatenate at the end. This ensures that we can stride in the forward direction as needed to leverage np.lib.stride_tricks.as_strided and thus avoid the need of actually rolling back. That's the whole idea!
Now, in terms of actual implementation we would use scikit-image's view_as_windows to elegantly use np.lib.stride_tricks.as_strided under the hood. Thus, the final implementation would be -
from skimage.util.shape import view_as_windows as viewW

def strided_indexing_roll(a, r):
    # Concatenate with sliced to cover all rolls
    a_ext = np.concatenate((a, a[:, :-1]), axis=1)
    # Get sliding windows; use advanced-indexing to select appropriate ones
    n = a.shape[1]
    return viewW(a_ext, (1, n))[np.arange(len(r)), (n - r) % n, 0]
Here's a sample run -
In [327]: A = np.array([[4, 0, 0],
     ...:               [1, 2, 3],
     ...:               [0, 0, 5]])
In [328]: r = np.array([2, 0, -1])
In [329]: strided_indexing_roll(A, r)
Out[329]:
array([[0, 0, 4],
       [1, 2, 3],
       [0, 5, 0]])
Benchmarking
# @seberg's solution
def advindexing_roll(A, r):
    rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
    r[r < 0] += A.shape[1]
    column_indices = column_indices - r[:, np.newaxis]
    return A[rows, column_indices]
Let's do some benchmarking on an array with a large number of rows and columns -
In [324]: np.random.seed(0)
...: a = np.random.rand(10000,1000)
...: r = np.random.randint(-1000,1000,(10000))
# @seberg's solution
In [325]: %timeit advindexing_roll(a, r)
10 loops, best of 3: 71.3 ms per loop
# Solution from this post
In [326]: %timeit strided_indexing_roll(a, r)
10 loops, best of 3: 44 ms per loop
In case you want a more general solution (dealing with any shape and any axis), I modified @seberg's solution:
def indep_roll(arr, shifts, axis=1):
    """Apply an independent roll for each dimension of a single axis.

    Parameters
    ----------
    arr : np.ndarray
        Array of any shape.
    shifts : np.ndarray
        How many positions to shift for each dimension. Shape: `(arr.shape[axis],)`.
    axis : int
        Axis along which elements are shifted.
    """
    arr = np.swapaxes(arr, axis, -1)
    all_idcs = np.ogrid[[slice(0, n) for n in arr.shape]]
    # Convert to a positive shift
    shifts[shifts < 0] += arr.shape[-1]
    all_idcs[-1] = all_idcs[-1] - shifts[:, np.newaxis]
    result = arr[tuple(all_idcs)]
    arr = np.swapaxes(result, -1, axis)
    return arr
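A quick sanity check on the 2D example from the question; note that indep_roll modifies shifts in place, so pass a copy if you want to keep r intact:
>>> indep_roll(A, r.copy(), axis=1)
array([[0, 0, 4],
       [1, 2, 3],
       [0, 5, 0]])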
I implemented a pure numpy.lib.stride_tricks.as_strided solution as follows:
from numpy.lib.stride_tricks import as_strided

def custom_roll(arr, r_tup):
    m = np.asarray(r_tup)
    arr_roll = arr[:, [*range(arr.shape[1]), *range(arr.shape[1] - 1)]].copy()  # need `copy`
    strd_0, strd_1 = arr_roll.strides
    n = arr.shape[1]
    result = as_strided(arr_roll, (*arr.shape, n), (strd_0, strd_1, strd_1))
    return result[np.arange(arr.shape[0]), (n - m) % n]
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])
out = custom_roll(A, r)
# out:
# array([[0, 0, 4],
#        [1, 2, 3],
#        [0, 5, 0]])
By using a fast Fourier transform we can apply the shift in the frequency domain and then use the inverse fast Fourier transform to obtain the row shift.
So this is a pure numpy solution that takes only one line:
import numpy as np
from numpy.fft import fft, ifft

# The row-shift function using the fast Fourier transform:
# rshift(A, r), where A is a 2D array and r is the row-shift vector
def rshift(A, r):
    return np.real(ifft(fft(A, axis=1) * np.exp(2*1j*np.pi/A.shape[1] * r[:, None] * np.r_[0:A.shape[1]][None, :]), axis=1).round())
This will apply a left shift, but we can simply negate the sign of the exponent to turn the function into a right-shift function:
ifft(fft(...) * np.exp(-2*1j*...))
It can be used like this:
# Example:
A = np.array([[1, 2, 3, 4],
              [1, 2, 3, 4],
              [1, 2, 3, 4]])
r = np.array([1, -1, 3])
print(rshift(A, r))
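For reference, the printed result here should be the left-shifted rows (as floats, because of the FFT round trip):
# [[2. 3. 4. 1.]
#  [4. 1. 2. 3.]
#  [4. 1. 2. 3.]]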
Building on Divakar's excellent answer, you can apply this logic to a 3D array easily (which was the problem that brought me here in the first place). Here's an example - basically flatten your data, roll it, and reshape it afterwards:
def applyroll_30(cube, threshold=25, offset=500):
    flattened_cube = cube.copy().reshape(cube.shape[0]*cube.shape[1], cube.shape[2])
    roll_matrix = calc_roll_matrix_flattened(flattened_cube, threshold, offset)
    rolled_cube = strided_indexing_roll(flattened_cube, roll_matrix, cube_shape=cube.shape)
    rolled_cube = rolled_cube.reshape(cube.shape[0], cube.shape[1], cube.shape[2])
    return rolled_cube
def calc_roll_matrix_flattened(cube_flattened, threshold, offset):
    """Calculates the number of positions along the time axis we need to shift
    elements in order to trigger the data.
    Returns a 1D numpy array with X*Y elements.
    """
    # argmax(...) finds the position in the cube (3d) where we are above threshold
    roll_matrix = np.argmax(cube_flattened > threshold, axis=1) + offset
    # ensure we don't have an index out of bounds
    roll_matrix[roll_matrix > cube_flattened.shape[1]] = cube_flattened.shape[1]
    return roll_matrix
def strided_indexing_roll(cube_flattened, roll_matrix_flattened, cube_shape):
    # Negate the rolls, otherwise we shift in the wrong direction for my application
    roll_matrix_flattened = -1 * roll_matrix_flattened
    # Concatenate with sliced to cover all rolls
    a_ext = np.concatenate((cube_flattened, cube_flattened[:, :-1]), axis=1)
    # Get sliding windows; use advanced-indexing to select appropriate ones
    # (viewW is skimage.util.shape.view_as_windows, imported above)
    n = cube_flattened.shape[1]
    result = viewW(a_ext, (1, n))[np.arange(len(roll_matrix_flattened)), (n - roll_matrix_flattened) % n, 0]
    result = result.reshape(cube_shape)
    return result
Divakar's answer doesn't do justice to how much more efficient this is on a large cube of data. I've timed it on 400x400x2000 data formatted as int8. An equivalent for-loop takes ~5.5 seconds, Seberg's answer ~3.0 seconds, and strided_indexing_roll ~0.5 seconds.

Nonzero function help, Python Numpy

I have two arrays, and I have a complex condition like this: new_arr<0 and old_arr>0
I am using nonzero but I am getting an error. The code I have is this:
indices = nonzero(new_arr<0 and old_arr>0)
I tried:
indices = nonzero(new_arr<0) and nonzero(old_arr>0)
But it gave me incorrect results.
Is there any way around this? And is there a way to get the common indices from two nonzero statements? For example, if:
indices1 = nonzero(new_arr<0)
indices2 = nonzero(old_arr>0)
and these two indices would contain:
indices1 = array([0, 1, 3])
indices2 = array([2, 3, 4])
The correct result would be getting the common element from these two (in this case it would be the element 3). Something like this:
result = common(indices1, indices2)
Try indices = nonzero((new_arr < 0) & (old_arr > 0)):
In [5]: import numpy as np
In [6]: old_arr = np.array([ 0,-1, 0,-1, 1, 1, 0, 1])
In [7]: new_arr = np.array([ 1, 1,-1,-1,-1,-1, 1, 1])
In [8]: np.nonzero((new_arr < 0) & (old_arr > 0))
Out[8]: (array([4, 5]),)
Try
indices = nonzero(logical_and(new < 0, old > 0))
(Thinking about it, my previous example wasn't all that useful if all it did was return nonzero(condition) anyway.)
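For the second part of the question, getting the common indices from two separate nonzero calls, np.intersect1d does exactly that:
>>> import numpy as np
>>> indices1 = np.array([0, 1, 3])
>>> indices2 = np.array([2, 3, 4])
>>> np.intersect1d(indices1, indices2)
array([3])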
