This is an extension of the question posed here (quoted below):
I have a matrix (2d numpy ndarray, to be precise):
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
And I want to roll each row of A independently, according to roll
values in another array:
r = np.array([2, 0, -1])
That is, I want to do this:
print(np.array([np.roll(row, x) for row, x in zip(A, r)]))
[[0 0 4]
 [1 2 3]
 [0 5 0]]
Is there a way to do this efficiently? Perhaps using fancy indexing
tricks?
The accepted solution was:
rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]
# Always use a negative shift, so that column_indices are valid.
# (could also use a modulo operation)
r[r < 0] += A.shape[1]
column_indices = column_indices - r[:,np.newaxis]
result = A[rows, column_indices]
I would basically like to do the same thing, except that when an index gets rolled "past" the end of the row, I would like the other side of the row to be padded with NaN, rather than having the value wrap around to the "front" of the row in a periodic fashion.
Maybe using np.pad somehow? But I can't figure out how to get that to pad different rows by different amounts.
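For reference, here is a minimal per-row baseline (not vectorized) that produces the NaN-padded behaviour described above; shift_fill is a hypothetical helper name, not from the original post:
import numpy as np

def shift_fill(row, k, fill=np.nan):
    # Shift a 1-D array by k, filling vacated positions instead of wrapping
    out = np.full(row.shape, fill, dtype=float)
    if k > 0:
        out[k:] = row[:-k]
    elif k < 0:
        out[:k] = row[-k:]
    else:
        out[:] = row
    return out

# np.array([shift_fill(row, k) for row, k in zip(A, r)]) gives the NaN-padded result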
Inspired by the solution to Roll rows of a matrix independently, here's a vectorized one based on np.lib.stride_tricks.as_strided -
import numpy as np
from skimage.util.shape import view_as_windows as viewW

def strided_indexing_roll(a, r):
    # Concatenate with NaN padding on both sides to cover all rolls
    p = np.full((a.shape[0], a.shape[1]-1), np.nan)
    a_ext = np.concatenate((p, a, p), axis=1)

    # Get sliding windows; use advanced indexing to select the appropriate ones
    n = a.shape[1]
    return viewW(a_ext, (1, n))[np.arange(len(r)), -r + (n-1), 0]
Sample run -
In [76]: a
Out[76]:
array([[4, 0, 0],
       [1, 2, 3],
       [0, 0, 5]])
In [77]: r
Out[77]: array([ 2,  0, -1])
In [78]: strided_indexing_roll(a, r)
Out[78]:
array([[nan, nan,  4.],
       [ 1.,  2.,  3.],
       [ 0.,  5., nan]])
I was able to hack this together with linear indexing...it gets the right result but performs rather slowly on large arrays.
A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]]).astype(float)
r = np.array([2, 0, -1])

rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]]

# Always use a negative shift, so that column_indices are valid.
# (could also use a modulo operation)
r_old = r.copy()
r[r < 0] += A.shape[1]
column_indices = column_indices - r[:, np.newaxis]
result = A[rows, column_indices]
# replace with NaNs
row_length = result.shape[-1]
pad_inds = []
for ind, i in enumerate(r_old):
    if i > 0:
        inds2pad = [np.ravel_multi_index((ind,) + (j,), result.shape) for j in range(i)]
        pad_inds.extend(inds2pad)
    if i < 0:
        inds2pad = [np.ravel_multi_index((ind,) + (j,), result.shape) for j in range(row_length + i, row_length)]
        pad_inds.extend(inds2pad)
result.ravel()[pad_inds] = np.nan
Gives the expected result:
print(result)
[[nan nan  4.]
 [ 1.  2.  3.]
 [ 0.  5. nan]]
Based on @Seberg's and @yann-dubois's answers for the non-NaN case, I've written a method that:
Is faster than the current answer
Works on ndarrays of any shape (specify the row-axis using the axis argument)
Allows for setting fill to either np.nan, any other "fill value" or False to allow regular rolling across the array edge.
Benchmarking
cols, rows = 1024, 2048
arr = np.stack(rows*(np.arange(cols,dtype=float),))
shifts = np.random.randint(-cols, cols, rows)
np.testing.assert_array_almost_equal(row_roll(arr, shifts), strided_indexing_roll(arr, shifts))
# True
%timeit row_roll(arr, shifts)
# 25.9 ms ± 161 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit strided_indexing_roll(arr, shifts)
# 29.7 ms ± 446 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
def row_roll(arr, shifts, axis=1, fill=np.nan):
    """Apply an independent roll for each row along a single axis.

    Parameters
    ----------
    arr : np.ndarray
        Array of any shape.
    shifts : np.ndarray, dtype int. Shape: `(arr.shape[:axis],)`.
        Amount to roll each row by. Positive shifts roll the row to the right.
    axis : int
        Axis along which elements are shifted.
    fill : bool or float
        Value to fill in at the positions rolled past the edge.
        Set to False to roll across the edges instead (regular periodic roll).
    """
    if np.issubdtype(arr.dtype, int) and isinstance(fill, float):
        arr = arr.astype(float)

    shifts2 = shifts.copy()
    arr = np.swapaxes(arr, axis, -1)
    all_idcs = np.ogrid[[slice(0, n) for n in arr.shape]]

    # Convert to a positive shift
    shifts2[shifts2 < 0] += arr.shape[-1]
    all_idcs[-1] = all_idcs[-1] - shifts2[:, np.newaxis]

    result = arr[tuple(all_idcs)]

    if fill is not False:
        # Create a mask of row positions above negative shifts
        # or below positive shifts, then set them to the fill value.
        *_, nrows, ncols = arr.shape

        mask_neg = shifts < 0
        mask_pos = shifts >= 0

        shifts_pos = shifts.copy()
        shifts_pos[mask_neg] = 0
        shifts_neg = shifts.copy()
        shifts_neg[mask_pos] = ncols + 1  # must be bigger than the biggest positive shift
        shifts_neg[mask_neg] = shifts[mask_neg] % ncols

        indices = np.stack(nrows * (np.arange(ncols),))
        nanmask = (indices < shifts_pos[:, None]) | (indices >= shifts_neg[:, None])
        result[nanmask] = fill

    arr = np.swapaxes(result, -1, axis)

    return arr
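A quick usage sketch (my addition, assuming the row_roll above) on the 3x3 example from the question; fill=np.nan pads past the edge, while fill=False falls back to a regular periodic roll:
import numpy as np

A = np.array([[4, 0, 0],
              [1, 2, 3],
              [0, 0, 5]])
r = np.array([2, 0, -1])

print(row_roll(A, r))              # NaN-padded roll
# [[nan nan  4.]
#  [ 1.  2.  3.]
#  [ 0.  5. nan]]
print(row_roll(A, r, fill=False))  # plain periodic roll
# [[0 0 4]
#  [1 2 3]
#  [0 5 0]]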
I am trying to create permutations of size 4 from a group of real numbers. After that, I'd like to know the position of the first element in a permutation after I sort it. Here is what I have tried so far. What's the best way to do this?
import numpy as np
from itertools import chain, permutations
N_PLAYERS = 4
N_STATES = 60
np.random.seed(0)
state_space = np.linspace(0.0, 1.0, num=N_STATES, retstep=True)[0].tolist()
perms = permutations(state_space, N_PLAYERS)
perms_arr = np.fromiter(chain(*perms),dtype=np.float16)
def loc(row):
    return np.where(np.argsort(row) == 0)[0].tolist()[0]
locs = np.apply_along_axis(loc, 0, perms)
In [153]: N_PLAYERS = 4
...: N_STATES = 60
...: np.random.seed(0)
...: state_space = np.linspace(0.0, 1.0, num=N_STATES, retstep=True)[0].tolist()
...: perms = itertools.permutations(state_space, N_PLAYERS)
In [154]: alist = list(perms)
In [155]: len(alist)
Out[155]: 11703240
Simply making a list from the permutations produces a list of tuples, all of length N_PLAYERS.
Making an array from that with chain flattens it:
In [156]: perms = itertools.permutations(state_space, N_PLAYERS)
In [158]: perms_arr = np.fromiter(itertools.chain(*perms),dtype=np.float16)
In [159]: perms_arr.shape
Out[159]: (46812960,)
In [160]: alist[0]
Which could be reshaped to (11703240,4).
Using apply on that 1d array doesn't work (or make sense):
In [170]: perms_arr.shape
Out[170]: (46812960,)
In [171]: locs = np.apply_along_axis(loc, 0, perms_arr)
In [172]: locs.shape
Out[172]: ()
Reshape to 4 columns:
In [173]: locs = np.apply_along_axis(loc, 0, perms_arr.reshape(-1,4))
In [174]: locs.shape
Out[174]: (4,)
In [175]: locs
Out[175]: array([ 0, 195054, 578037, 769366])
This applies loc to each column, returning one value for each. But loc has a row variable. Is that supposed to be significant?
I could switch the axis; this takes much longer, and applies loc to each row instead:
In [176]: locs = np.apply_along_axis(loc, 1, perms_arr.reshape(-1,4))
In [177]: locs.shape
Out[177]: (11703240,)
list comprehension
This iteration does the same thing as your apply_along_axis, and I expect it is faster (though I haven't timed it - it's too slow).
In [188]: locs1 = np.array([loc(row) for row in perms_arr.reshape(-1,4)])
In [189]: np.allclose(locs, locs1)
Out[189]: True
whole array sort
But argsort takes an axis, so I can sort all rows at once (instead of iterating):
In [185]: np.nonzero(np.argsort(perms_arr.reshape(-1,4), axis=1)==0)
Out[185]:
(array([ 0, 1, 2, ..., 11703237, 11703238, 11703239]),
array([0, 0, 0, ..., 3, 3, 3]))
In [186]: np.allclose(_[1],locs)
Out[186]: True
Or going the other direction (compare with Out[175]):
In [187]: np.nonzero(np.argsort(perms_arr.reshape(-1,4), axis=0)==0)
Out[187]: (array([ 0, 195054, 578037, 769366]), array([0, 1, 2, 3]))
I am looking for the fastest way to obtain a list of the nonzero indices of a 2D array per row and per column. The following is a working piece of code:
preds = [matrix[:,v].nonzero()[0] for v in range(matrix.shape[1])]
descs = [matrix[v].nonzero()[0] for v in range(matrix.shape[0])]
Example input:
matrix = np.array([[0,0,0,0],[1,0,0,0],[1,1,0,0],[1,1,1,0]])
Example output
preds = [array([1, 2, 3]), array([2, 3]), array([3]), array([], dtype=int64)]
descs = [array([], dtype=int64), array([0]), array([0, 1]), array([0, 1, 2])]
(The lists are called preds and descs because they refer to the predecessors and descendants in a DAG when the matrix is interpreted as an adjacency matrix but this is not essential to the question.)
Timing example:
For timing purposes, the following matrix is a good representative:
test_matrix = np.zeros(shape=(4096,4096),dtype=np.float32)
for k in range(16):
test_matrix[256*(k+1):256*(k+2),256*k:256*(k+1)]=1
Background: In my code, these two lines take 75% of the time for a 4000x4000 matrix, whereas the ensuing topological sort and DP algorithm take only the remaining quarter. Roughly 5% of the values in the matrix are nonzero, so a sparse-matrix solution may be applicable.
Thank you.
(On suggestion, this is also posted here: https://scicomp.stackexchange.com/questions/35242/fast-nonzero-indices-per-row-column-for-sparse-2d-numpy-array
There are answers there as well, for which I will provide timings in the comments; that link contains an accepted answer that is twice as fast.)
If you have enough motivation, Numba can do amazing things.
Here is a quick implementation of the logic you need.
Briefly, it computes the equivalent of np.nonzero(), but along the way it also collects the information needed to later dispatch the indices into the format you require.
Those offset arrays are inspired by sparse.csr.indptr and sparse.csc.indptr.
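To illustrate what those offset arrays encode (a small sketch of my own using scipy's CSR format, not part of the answer's code): indptr[i]:indptr[i+1] slices out the column indices of row i, which is exactly how np.split is used below.
import numpy as np
from scipy import sparse

m = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0]])
csr = sparse.csr_matrix(m)
print(csr.indices)   # [0 0 1 0 1 2] -> column indices of nonzeros, row by row
print(csr.indptr)    # [0 0 1 3 6]   -> row i owns indices[indptr[i]:indptr[i+1]]
print(np.split(csr.indices, csr.indptr[1:-1]))
# [array([], dtype=int32), array([0]), array([0, 1]), array([0, 1, 2])]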
import numpy as np
import numba as nb

@nb.jit
def cumsum(arr):
    result = np.empty_like(arr)
    cumsum = result[0] = arr[0]
    for i in range(1, len(arr)):
        cumsum += arr[i]
        result[i] = cumsum
    return result

@nb.jit
def count_nonzero(arr):
    arr = arr.ravel()
    n = 0
    for x in arr:
        if x != 0:
            n += 1
    return n

@nb.jit
def row_col_nonzero_nb(arr):
    n, m = arr.shape
    max_k = count_nonzero(arr)
    indices = np.empty((2, max_k), dtype=np.uint32)
    i_offset = np.zeros(n + 1, dtype=np.uint32)
    j_offset = np.zeros(m + 1, dtype=np.uint32)
    n, m = arr.shape
    k = 0
    for i in range(n):
        for j in range(m):
            if arr[i, j] != 0:
                indices[:, k] = i, j
                i_offset[i + 1] += 1
                j_offset[j + 1] += 1
                k += 1
    return indices, cumsum(i_offset), cumsum(j_offset)

def row_col_idx_nonzero_nb(arr):
    (ii, jj), jj_split, ii_split = row_col_nonzero_nb(arr)
    ii_ = np.argsort(jj)
    ii = ii[ii_]
    return np.split(ii, ii_split[1:-1]), np.split(jj, jj_split[1:-1])
Compared to your approach (row_col_idx_sep() below) and a bunch of others, as per @hpaulj's answer (row_col_idx_sparse_lil()) and @knl's answer from scicomp.stackexchange.com (row_col_idx_sparse_coo()):
def row_col_idx_sep(arr):
    return (
        [arr[:, j].nonzero()[0] for j in range(arr.shape[1])],
        [arr[i, :].nonzero()[0] for i in range(arr.shape[0])],)

def row_col_idx_zip(arr):
    n, m = arr.shape
    ii = [[] for _ in range(n)]
    jj = [[] for _ in range(m)]
    x, y = np.nonzero(arr)
    for i, j in zip(x, y):
        ii[i].append(j)
        jj[j].append(i)
    return jj, ii

import scipy as sp
import scipy.sparse

def row_col_idx_sparse_coo(arr):
    coo_mat = sp.sparse.coo_matrix(arr)
    csr_mat = coo_mat.tocsr()
    csc_mat = coo_mat.tocsc()
    return (
        np.split(csc_mat.indices, csc_mat.indptr)[1:-1],
        np.split(csr_mat.indices, csr_mat.indptr)[1:-1],)

def row_col_idx_sparse_lil(arr):
    lil_mat = sp.sparse.lil_matrix(arr)
    return lil_mat.T.rows, lil_mat.rows
For inputs generated using:
def gen_input(n, density=0.1, dtype=np.float32):
    arr = np.zeros(shape=(n, n), dtype=dtype)
    indices = tuple(np.random.randint(0, n, (2, int(n * n * density))).tolist())
    arr[indices] = 1.0
    return arr
One would get (your test_matrix had approximately 0.06 non-zero density):
m = gen_input(4096, density=0.06)
%timeit row_col_idx_sep(m)
# 1 loop, best of 3: 767 ms per loop
%timeit row_col_idx_zip(m)
# 1 loop, best of 3: 660 ms per loop
%timeit row_col_idx_sparse_coo(m)
# 1 loop, best of 3: 205 ms per loop
%timeit row_col_idx_sparse_lil(m)
# 1 loop, best of 3: 498 ms per loop
%timeit row_col_idx_nonzero_nb(m)
# 10 loops, best of 3: 130 ms per loop
Indicating this to be close to twice as fast as the fastest scipy.sparse-based approach.
In [182]: arr = np.array([[0,0,0,0],[1,0,0,0],[1,1,0,0],[1,1,1,0]])
The data is present in the whole-array nonzero, just not broken up into per row/column arrays:
In [183]: np.nonzero(arr)
Out[183]: (array([1, 2, 2, 3, 3, 3]), array([0, 0, 1, 0, 1, 2]))
In [184]: np.argwhere(arr)
Out[184]:
array([[1, 0],
       [2, 0],
       [2, 1],
       [3, 0],
       [3, 1],
       [3, 2]])
It might be possible to break the array([1, 2, 2, 3, 3, 3]) into sublists, [1,2,3],[2,3],[3],[] based on the other array. But it may take some time to work out the logic for that, and there's no guarantee that it will be faster than your row/column iterations.
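A sketch of that idea (my own illustration, not timed): since np.nonzero returns indices in row-major order, the per-row column lists fall out of a single np.split, and a stable sort on the column indices gives the per-column grouping:
import numpy as np

arr = np.array([[0,0,0,0],[1,0,0,0],[1,1,0,0],[1,1,1,0]])
rows, cols = np.nonzero(arr)

# descs: nonzero output is already grouped by row, so split the column indices directly
descs = np.split(cols, np.searchsorted(rows, np.arange(1, arr.shape[0])))

# preds: regroup by column with a stable sort, then split the row indices
order = np.argsort(cols, kind='stable')
preds = np.split(rows[order], np.searchsorted(cols[order], np.arange(1, arr.shape[1])))
# preds -> [1 2 3], [2 3], [3], []; descs -> [], [0], [0 1], [0 1 2]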
Logical operations can reduce the boolean array to column or row, giving the rows or columns where nonzero occurs, but again not ragged:
In [185]: arr!=0
Out[185]:
array([[False, False, False, False],
       [ True, False, False, False],
       [ True,  True, False, False],
       [ True,  True,  True, False]])
In [186]: (arr!=0).any(axis=0)
Out[186]: array([ True, True, True, False])
In [187]: np.nonzero((arr!=0).any(axis=0))
Out[187]: (array([0, 1, 2]),)
In [188]: np.nonzero((arr!=0).any(axis=1))
Out[188]: (array([1, 2, 3]),)
In [189]: arr
Out[189]:
array([[0, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 1, 0, 0],
       [1, 1, 1, 0]])
The scipy.sparse lil format does generate the data you want:
In [190]: sparse
Out[190]: <module 'scipy.sparse' from '/usr/local/lib/python3.6/dist-packages/scipy/sparse/__init__.py'>
In [191]: M = sparse.lil_matrix(arr)
In [192]: M
Out[192]:
<4x4 sparse matrix of type '<class 'numpy.longlong'>'
with 6 stored elements in List of Lists format>
In [193]: M.rows
Out[193]: array([list([]), list([0]), list([0, 1]), list([0, 1, 2])], dtype=object)
In [194]: M.T
Out[194]:
<4x4 sparse matrix of type '<class 'numpy.longlong'>'
with 6 stored elements in List of Lists format>
In [195]: M.T.rows
Out[195]: array([list([1, 2, 3]), list([2, 3]), list([3]), list([])], dtype=object)
But timing probably isn't any better than your row or column iteration.
I have a numpy array in which indices are stored, with shape (n, 2). E.g.:
[[0, 1],
 [2, 3],
 [1, 2],
 [4, 2]]
Then I do some processing and create an array in the shape of (m, 2), where n > m. E.g.:
[[2, 3],
 [4, 2]]
Now I want to delete every row in the first array that can be found in the second array as well. So my wanted result is:
[[0, 1],
 [1, 2]]
My current solution is as follows:
result = first_array
for row in second_array:
    result = np.delete(result, np.where(np.all(result == row, axis=1)), axis=0)
However, this is quite time consuming if the second array is large. Does anyone know a numpy-only solution that does not require a loop?
Here's one leveraging the fact that they are positive numbers using matrix-multiplication for dimensionality-reduction -
def setdiff_nd_positivenums(a, b):
    s = np.maximum(a.max(0)+1, b.max(0)+1)
    return a[~np.isin(a.dot(s), b.dot(s))]
Sample run -
In [82]: a
Out[82]:
array([[0, 1],
       [2, 3],
       [1, 2],
       [4, 2]])
In [83]: b
Out[83]:
array([[2, 3],
       [4, 2]])
In [85]: setdiff_nd_positivenums(a,b)
Out[85]:
array([[0, 1],
       [1, 2]])
Also, it seems the second-array b is a subset of a. So, we can leverage that scenario to boost the performance even further using np.searchsorted, like so -
def setdiff_nd_positivenums_searchsorted(a, b):
    s = np.maximum(a.max(0)+1, b.max(0)+1)
    a1D, b1D = a.dot(s), b.dot(s)
    b1Ds = np.sort(b1D)
    return a[b1Ds[np.searchsorted(b1Ds, a1D)] != a1D]
Timings -
In [146]: np.random.seed(0)
...: a = np.random.randint(0,9,(1000000,2))
...: b = a[np.random.choice(len(a), 10000, replace=0)]
In [147]: %timeit setdiff_nd_positivenums(a,b)
...: %timeit setdiff_nd_positivenums_searchsorted(a,b)
10 loops, best of 3: 101 ms per loop
10 loops, best of 3: 70.9 ms per loop
For generic numbers, here's another using views -
# https://stackoverflow.com/a/45313353/ @Divakar
def view1D(a, b):  # a, b are arrays
    a = np.ascontiguousarray(a)
    b = np.ascontiguousarray(b)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel(), b.view(void_dt).ravel()

def setdiff_nd(a, b):
    # a, b are the nD input arrays
    A, B = view1D(a, b)
    return a[~np.isin(A, B)]
Sample run -
In [94]: a
Out[94]:
array([[ 0,  1],
       [-2, -3],
       [ 1,  2],
       [-4, -2]])
In [95]: b
Out[95]:
array([[-2, -3],
       [ 4,  2]])
In [96]: setdiff_nd(a,b)
Out[96]:
array([[ 0,  1],
       [ 1,  2],
       [-4, -2]])
Timings -
In [158]: np.random.seed(0)
...: a = np.random.randint(0,9,(1000000,2))
...: b = a[np.random.choice(len(a), 10000, replace=0)]
In [159]: %timeit setdiff_nd(a,b)
1 loop, best of 3: 352 ms per loop
The numpy-indexed package (disclaimer: I am its author) was designed to perform operations of this type efficiently on nd-arrays.
import numpy_indexed as npi
# if the output should consist of unique values and there is no need to preserve ordering
result = npi.difference(first_array, second_array)
# otherwise:
result = first_array[~npi.in_(first_array, second_array)]
Here is a function that works with 2D integer arrays of any shape, accepting both positive and negative numbers:
import numpy as np

# Returns a boolean array telling which rows of a are also in b
def isin_rows(a, b):
    a = np.asarray(a)
    b = np.asarray(b)
    # Subtract minimum value per column
    min = np.minimum(a.min(0), b.min(0))
    a = a - min
    b = b - min
    # Get maximum value per column
    max = np.maximum(a.max(0), b.max(0))
    # Compute multiplicative base (mixed-radix weights) for each column
    base = np.roll(max + 1, 1)
    base[0] = 1
    base = np.cumprod(base)
    # Make flattened version of arrays
    a_flat = (a * base).sum(1)
    b_flat = (b * base).sum(1)
    # Check elements of a in b
    return np.isin(a_flat, b_flat)
# Test
a = np.array([[0, 1],
              [2, 3],
              [1, 2],
              [4, 2]])
b = np.array([[2, 3],
              [4, 2]])
a_in_b_mask = isin_rows(a, b)
a_not_in_b = a[~a_in_b_mask]
print(a_not_in_b)
# [[0 1]
#  [1 2]]
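As a side note (my addition, assuming the non-negative integers produced by the min-subtraction above): the same row-to-scalar flattening can be expressed with np.ravel_multi_index, which builds equivalent mixed-radix weights internally:
import numpy as np

a = np.array([[0, 1], [2, 3], [1, 2], [4, 2]])
b = np.array([[2, 3], [4, 2]])
dims = np.maximum(a.max(0), b.max(0)) + 1
a_flat = np.ravel_multi_index(a.T, dims)
b_flat = np.ravel_multi_index(b.T, dims)
print(a[~np.isin(a_flat, b_flat)])
# [[0 1]
#  [1 2]]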
EDIT: One possible optimization arises from considering the number of possible rows in b. If b has more rows than the possible number of combinations, then you may want to find its unique elements first so that np.isin is faster:
import numpy as np

def isin_rows_opt(a, b):
    a = np.asarray(a)
    b = np.asarray(b)
    min = np.minimum(a.min(0), b.min(0))
    a = a - min
    b = b - min
    max = np.maximum(a.max(0), b.max(0))
    base = np.roll(max + 1, 1)
    base[0] = 1
    base = np.cumprod(base)
    a_flat = (a * base).sum(1)
    b_flat = (b * base).sum(1)
    # Count the number of possible different rows for b
    num_possible_b = np.prod(b.max(0) - b.min(0) + 1)
    if len(b_flat) > num_possible_b:  # may tune this condition
        b_flat = np.unique(b_flat)
    return np.isin(a_flat, b_flat)
The condition len(b_flat) > num_possible_b should probably be tuned so that you only look for unique elements when it is really going to be worth it (maybe len(b_flat) > 2 * num_possible_b or len(b_flat) > num_possible_b + CONSTANT). It seems to give some improvement for big arrays with few distinct values:
import numpy as np
# Test setup from @Divakar
np.random.seed(0)
a = np.random.randint(0, 9, (1000000, 2))
b = a[np.random.choice(len(a), 10000, replace=0)]
print(np.all(isin_rows(a, b) == isin_rows_opt(a, b)))
# True
%timeit isin_rows(a, b)
# 100 ms ± 425 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit isin_rows_opt(a, b)
# 81.2 ms ± 324 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
I have a data set in numpy with an x vector and a y vector. The y vector takes only two values, +1 or -1 (or 0 or 1), because it's a binary-valued function. I know I can just loop over the data set and map each +1 to 1 and each -1 to 0 one by one. However, I was hoping that, given the whole vector y of shape [N x 1], I could map it in one step to a vector of shape [N x 2], since N can be quite large and I wanted to do it as quickly as possible (I also didn't want to keep two copies of the data set).
Is there a vectorized way to do this transformation quickly in python?
For the reference here is the looping code:
def transform_data_to_one_hot(X, Y):
    N, D = Y.shape
    Y_new = np.zeros((N, 2))
    for i in range(N):
        if Y[i] == -1:
            Y_new[i] = np.array([1, 0])
        else:
            Y_new[i] = np.array([0, 1])
    return Y_new
Let's do the parity function using Rademacher variables (i.e. +1, -1 instead of 0 and 1). In this case the parity function is just the product function:
>>> X = np.array([[-1,-1],[-1,1],[1,-1],[1,1]])
>>> X
array([[-1, -1],
       [-1,  1],
       [ 1, -1],
       [ 1,  1]])
>>> Y = np.reshape(np.prod(X,axis=1),[4,1])
>>> Y
array([[ 1],
       [-1],
       [-1],
       [ 1]])
the Y vector, when one-hot encoded, should be:
>>> Y
array([[0, 1],
       [1, 0],
       [1, 0],
       [0, 1]])
Here's one initialization-based approach -
def initialization_based(y):
    out = np.zeros((len(y), 2), dtype=int)
    out[np.arange(out.shape[0]), (y == 1).astype(int)] = 1
    return out
Sample run -
In [244]: y
Out[244]: array([ 1, -1,  1,  1, -1,  1, -1,  1])
In [245]: initialization_based(y)
Out[245]:
array([[0, 1],
       [1, 0],
       [0, 1],
       [0, 1],
       [1, 0],
       [0, 1],
       [1, 0],
       [0, 1]])
Other ways to use the initialization method -
def initialization_based_v2(y):
    out = np.zeros((len(y), 2), dtype=int)
    out[np.arange(out.shape[0]), (y+1)//2] = 1
    return out

def initialization_based_v3(y):
    yc = y.copy()
    yc[yc == -1] = 0
    out = np.zeros((len(y), 2), dtype=int)
    out[np.arange(out.shape[0]), yc] = 1
    return out
The two new additions only differ in how the column indices are set up: version 2 computes them simply as (y+1)//2, while version 3 uses yc = y.copy(); yc[yc==-1] = 0.
Another one that gets pretty close to @Eric's, but uses a boolean array -
def initialization_based_v4(y):
    out = np.empty((len(y), 2), dtype=int)
    mask = y == 1
    out[:, 0] = mask
    out[:, 1] = ~mask
    return out
Runtime test -
In [320]: y = 2*np.random.randint(0,2,(1000000))-1
In [321]: %timeit sign_to_one_hot(y, dtype=int)
...: %timeit initialization_based(y)
...: %timeit initialization_based_v2(y)
...: %timeit initialization_based_v3(y)
...: %timeit initialization_based_v4(y)
...:
100 loops, best of 3: 3.16 ms per loop
100 loops, best of 3: 8.39 ms per loop
10 loops, best of 3: 27.2 ms per loop
100 loops, best of 3: 13.8 ms per loop
100 loops, best of 3: 3.11 ms per loop
In [322]: from sklearn.preprocessing import OneHotEncoder
In [323]: enc = OneHotEncoder(sparse=False)
In [324]: %timeit enc.fit_transform(np.where(y>=0, y, 0))
10 loops, best of 3: 77.3 ms per loop
A few simple observations to make this efficient:
Preallocate the result, rather than using concatenate
empty is faster than zeros if you're just going to overwrite those zeros
Use the out argument, to avoid temporaries
def sign_to_one_hot(x, dtype=np.float64):
    out = np.empty(x.shape + (2,), dtype=dtype)
    plus_one = out[..., 0]
    minus_one = out[..., 1]
    np.equal(x, 1, out=plus_one)
    np.subtract(1, plus_one, out=minus_one)
    return out
Choose your dtype carefully - casting later because you chose the wrong one will incur a copy.
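A small usage sketch (my addition, assuming the sign_to_one_hot above): note that this function puts the +1 flag in column 0 and the -1 flag in column 1, the reverse of the column order shown in the question.
import numpy as np

y = np.array([1, -1, -1, 1])
print(sign_to_one_hot(y, dtype=int))
# [[1 0]
#  [0 1]
#  [0 1]
#  [1 0]]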
You can also use sklearn.preprocessing.OneHotEncoder method.
NOTE: it doesn't accept negative numbers, so we have to replace them.
Demo:
from sklearn.preprocessing import OneHotEncoder
# by default it generates a sparse matrix - that can be very useful for huge data sets
enc = OneHotEncoder(sparse=False)
rslt = enc.fit_transform(np.where(Y >= 0, Y, 0))
Result:
In [140]: rslt
Out[140]:
array([[ 0.,  1.],
       [ 1.,  0.],
       [ 1.,  0.],
       [ 0.,  1.]])
Source array:
In [141]: Y
Out[141]:
array([[ 1],
       [-1],
       [-1],
       [ 1]])
Pandas solution:
In [148]: pd.get_dummies(Y.ravel())
Out[148]:
   -1  1
0   0  1
1   1  0
2   1  0
3   0  1
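To get back a plain ndarray from that (my note; the columns come out ordered by value, i.e. -1 first and +1 second):
import numpy as np
import pandas as pd

Y = np.array([[1], [-1], [-1], [1]])
one_hot = pd.get_dummies(Y.ravel()).to_numpy().astype(int)
print(one_hot)
# [[0 1]
#  [1 0]
#  [1 0]
#  [0 1]]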
Looking for a fast vectorized function that returns the rolling number of consecutive non-zero values. The count should start over at 0 whenever encountering a zero. The result should have the same shape as the input array.
Given an array like this:
x = np.array([2.3, 1.2, 4.1 , 0.0, 0.0, 5.3, 0, 1.2, 3.1])
The function should return this:
array([1, 2, 3, 0, 0, 1, 0, 1, 2])
This post lists a vectorized approach which basically consists of two steps:
Initialize a zeros vector of the same size as the input vector x, and set ones at the places corresponding to non-zeros of x.
Next, in that vector, put the negative of the run length of each "island" of non-zeros right after that island's stop position. The intention is to use cumsum later on, which then produces sequential numbers within the islands and zeros elsewhere.
Here's the implementation -
import numpy as np

# Append zeros at the start and end of input array, x
xa = np.hstack([[0], x, [0]])

# Get an array of ones and zeros, with ones for nonzeros of x and zeros elsewhere
xa1 = (xa != 0) + 0

# Find consecutive differences on xa1
xadf = np.diff(xa1)

# Find start and stop+1 indices and thus the lengths of "islands" of non-zeros
starts = np.where(xadf == 1)[0]
stops_p1 = np.where(xadf == -1)[0]
lens = stops_p1 - starts

# Mark indices where "minus ones" are to be put for applying cumsum
put_m1 = stops_p1[stops_p1 < x.size]

# Setup vector with ones for nonzero x's, "minus lens" at stops+1 & zeros elsewhere
vec = xa1[1:-1]  # Note: this will change xa1, but that's okay as it's not needed anymore
vec[put_m1] = -lens[0:put_m1.size]

# Perform cumsum to get the desired output
out = vec.cumsum()
Sample run -
In [116]: x
Out[116]: array([ 0. , 2.3, 1.2, 4.1, 0. , 0. , 5.3, 0. , 1.2, 3.1, 0. ])
In [117]: out
Out[117]: array([0, 1, 2, 3, 0, 0, 1, 0, 1, 2, 0], dtype=int32)
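For the runtime tests below, the step-by-step code above is presumably wrapped in a function; here is my own sketch of both sumrunlen_vectorized (the steps above, packaged) and sumrunlen_loopy (assumed to be a plain Python loop), since neither definition appears in the excerpt:
import numpy as np

def sumrunlen_vectorized(x):
    # same steps as above, packaged as a function
    xa = np.hstack([[0], x, [0]])
    xa1 = (xa != 0) + 0
    xadf = np.diff(xa1)
    starts = np.where(xadf == 1)[0]
    stops_p1 = np.where(xadf == -1)[0]
    lens = stops_p1 - starts
    put_m1 = stops_p1[stops_p1 < x.size]
    vec = xa1[1:-1]
    vec[put_m1] = -lens[0:put_m1.size]
    return vec.cumsum()

def sumrunlen_loopy(x):
    # straightforward Python loop, for reference
    out = np.zeros(x.size, dtype=int)
    count = 0
    for i, v in enumerate(x):
        count = count + 1 if v != 0 else 0
        out[i] = count
    return out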
Runtime tests -
Here are some runtime tests comparing the proposed approach against the itertools.groupby-based approach -
In [21]: N = 1000000
...: x = np.random.rand(1,N)
...: x[x>0.5] = 0.0
...: x = x.ravel()
...:
In [19]: %timeit sumrunlen_vectorized(x)
10 loops, best of 3: 19.9 ms per loop
In [20]: %timeit sumrunlen_loopy(x)
1 loops, best of 3: 2.86 s per loop
You can use itertools.groupby and np.hstack:
>>> import numpy as np
>>> x = np.array([2.3, 1.2, 4.1 , 0.0, 0.0, 5.3, 0, 1.2, 3.1])
>>> from itertools import groupby
>>> np.hstack([[i if j!=0 else j for i,j in enumerate(g,1)] for _,g in groupby(x,key=lambda x: x!=0)])
array([ 1., 2., 3., 0., 0., 1., 0., 1., 2.])
We group the array elements based on whether they are non-zero, then use a list comprehension with enumerate to replace each non-zero sub-array with its running indices, and finally flatten the list with np.hstack.
This sub-problem came up in Kick Start 2021 Round A for me. My solution:
def current_run_len(a):
    a_ = np.hstack([0, a != 0, 0])  # first in starts and last in stops defined
    d = np.diff(a_)
    starts = np.where(d == 1)[0]
    stops = np.where(d == -1)[0]
    a_[stops + 1] = -(stops - starts)  # +1 for behind-last
    return a_[1:-1].cumsum()
In fact, the problem also required a version where you count down consecutive sequences. So here is another version with an optional keyword argument; with rev=False it does the same as above:
def current_run_len(a, rev=False):
    a_ = np.hstack([0, a != 0, 0])  # first in starts and last in stops defined
    d = np.diff(a_)
    starts = np.where(d == 1)[0]
    stops = np.where(d == -1)[0]
    if rev:
        a_[starts] = -(stops - starts)
        cs = -a_.cumsum()[:-2]
    else:
        a_[stops + 1] = -(stops - starts)  # +1 for behind-last
        cs = a_.cumsum()[1:-1]
    return cs
Results:
a = np.array([1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1])
print('a = ', a)
print('current_run_len(a) = ', current_run_len(a))
print('current_run_len(a, rev=True) = ', current_run_len(a, rev=True))
a = [1 1 1 1 0 0 0 1 1 0 1 0 0 0 1]
current_run_len(a) = [1 2 3 4 0 0 0 1 2 0 1 0 0 0 1]
current_run_len(a, rev=True) = [4 3 2 1 0 0 0 2 1 0 1 0 0 0 1]
For an array that consists of 0s and 1s only, you can simplify [0, a != 0, 0] to [0, a, 0]. But the version as-posted also works for arbitrary non-zero numbers.