numpy vectorize use on (2,) array - python

I have a numpy array of (m, 2) and I want to transform it to shape of (m, 1) using a function below.
def func(x):
    if x == [1., 1.]:
        return 0.
    if x == [-1., 1.] or x == [-1., -1.]:
        return 1.
    if x == [1., -1.]:
        return 2.
I want this to be applied to each (2,) vector inside the (m, 2) array, resulting in an (m, 1) array. I tried numpy.vectorize, but it seems the function gets applied to each element of the array (which makes sense in the general-purpose case), so I have failed to apply it.
My intention is to avoid a for loop. Can anyone help me with this? Thanks.

import numpy as np

def f(a, b):
    return a + b

F = np.vectorize(f)
x = np.asarray([[1, 2], [3, 4], [5, 6]]).T
print(F(*x))
Output:
[3 7 11]
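The same trick applies to the original labelling problem: vectorize over the two columns instead of over rows. A minimal sketch, assuming the four sign combinations in the question are the only possible inputs (label and label_v are illustrative names):
import numpy as np

def label(a, b):
    # scalar version of func above
    if a == 1. and b == 1.:
        return 0.
    if a == -1.:  # covers [-1., 1.] and [-1., -1.]
        return 1.
    return 2.     # [1., -1.]

label_v = np.vectorize(label)

x = np.array([[1., 1.], [-1., 1.], [1., -1.], [-1., -1.]])
print(label_v(x[:, 0], x[:, 1]).reshape(-1, 1))  # shape (m, 1): [[0.], [1.], [2.], [1.]]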

Related

Python Optimization: Using vector technique to find power of each matrix in a numpy array

3D numpy array A contains a series (in this example, I am choosing 3) of 2D numpy arrays D of shape 2 x 2. The D matrix is as follows:
D = np.array([[1,2],[3,4]])
A is initialized and assigned as below:
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]], [[1,2],[3,4]], [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
Now, essentially what I require after the execution of the code is:
Mathematically, A = {D^0, D^1, D^2} = {D0, D1, D2}
where D0 = [[1,0],[0,1]], D1 = [[1,2],[3,4]], D2 = [[7,10],[15,22]]
Is it possible to apply these powers to each matrix in A without using a for-loop? I would be doing larger matrices with more in the series.
I had defined n = np.array([0,1,2]) (corresponding to powers 0, 1 and 2) and tried
Result = np.power(A, n), but I do not get the desired output.
Is there an efficient way to do it?
Full code:
D = np.array([[1,2],[3,4]])
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]], [[1,2],[3,4]], [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
n = np.array([0,1,2])
Result = np.power(A, n)  # ------> Not the desired output.
A cumulative product exists in numpy, but not for matrices. Therefore, you need to make your own 'matcumprod' function. You can use np.dot for this, but np.matmul (or @) is specialized for matrix multiplication.
Since you state your powers always go from 0 to some_power, I suggest the following function:
def matcumprod(D, upto):
    Res = np.empty((upto, *D.shape), dtype=D.dtype)
    Res[0, :, :] = np.eye(D.shape[0])
    for i in range(1, upto):
        Res[i, :, :] = Res[i - 1, :, :] @ D
    return Res
By the way, a loop oftentimes outperforms a built-in numpy function if the latter uses a lot of memory, so don't fret over it if your powers stay within bounds...
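A quick usage sketch (the printed values assume the D from the question):
D = np.array([[1, 2], [3, 4]])
print(matcumprod(D, 3))
# [[[ 1  0]
#   [ 0  1]]
#
#  [[ 1  2]
#   [ 3  4]]
#
#  [[ 7 10]
#   [15 22]]]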
Alright, I spent a lot of time on this problem but could not find a vectorized solution of the kind you'd like. So I would like to first propose a basic solution, and then perhaps an optimization if you require finding continuous powers.
The function you're looking for is called numpy.linalg.matrix_power.
import numpy as np

D = np.matrix([[1,2],[3,4]])
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]], [[1,2],[3,4]], [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
n = np.array([0,1,2])
result = [np.linalg.matrix_power(D, i) for i in n]
np.array(result)
#Output:
array([[[ 1,  0],
        [ 0,  1]],

       [[ 1,  2],
        [ 3,  4]],

       [[ 7, 10],
        [15, 22]]])
However, if you notice, you end up calculating multiple powers for the same base matrix. We could instead utilize the intermediate results and go from there, using numpy.linalg.multi_dot
def all_powers_arr_of_matrix(A):
    result = np.zeros(A.shape)
    result[0] = np.linalg.matrix_power(A[0], 0)
    for i in range(1, A.shape[0]):
        result[i] = np.linalg.multi_dot([result[i - 1], A[i]])
    return result

result = all_powers_arr_of_matrix(A)
#Output:
array([[[ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  2.],
        [ 3.,  4.]],

       [[ 7., 10.],
        [15., 22.]]])
Also, we can avoid creating the matrix A entirely, saving some time.
def all_powers_matrix(D, *rangeargs):  # end exclusive
    ''' Expects a 2D matrix.
    Use as all_powers_matrix(D, end) or
           all_powers_matrix(D, start, end)
    '''
    if len(rangeargs) == 1:
        start = 0
        end = rangeargs[0]
    elif len(rangeargs) == 2:
        start = rangeargs[0]
        end = rangeargs[1]
    else:
        print("incorrect args")
        return None

    result = np.zeros((end - start, *D.shape))
    result[0] = np.linalg.matrix_power(D, start)
    for i in range(1, end - start):
        result[i] = np.linalg.multi_dot([result[i - 1], D])
    return result

result = all_powers_matrix(D, 3)
#Output:
array([[[ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  2.],
        [ 3.,  4.]],

       [[ 7., 10.],
        [15., 22.]]])
Note that you'd need to add error handling if you decide to use these functions as-is.
To calculate powers of the matrix D, one way is to find its eigenvalues and right eigenvectors with np.linalg.eig, raise the diagonal matrix of eigenvalues to the powers (which is easy), and then, after some manipulation, use two np.einsum calls to compute A.
# get the eigenvalues and right eigenvectors
eigval, eigvect = np.linalg.eig(D)

# to check how it works, you can do:
print(np.dot(eigvect * eigval, np.linalg.inv(eigvect)))
# [[1. 2.]
#  [3. 4.]]
# so you get back D

# use np.power.outer with n on the eigenvalues to get all the powers you want
arrp = np.power.outer(eigval, n).T

# apply_along_axis to create the diagonal matrices along the last axis
diagp = np.apply_along_axis(np.diag, axis=-1, arr=arrp)

# finally use two np.einsum calls with the right subscripts
A = np.einsum('lij,jk -> lik',
              np.einsum('ij,kjl -> kil', eigvect, diagp),
              np.linalg.inv(eigvect)).round()
print(A)
print(A.shape)
# [[[ 1.  0.]
#   [-0.  1.]]
#
#  [[ 1.  2.]
#   [ 3.  4.]]
#
#  [[ 7. 10.]
#   [15. 22.]]]
#
# (3, 2, 2)
I don't have a full solution, but there are some things I wanted to mention which are a bit too long for the comments.
You might first look into addition-chain exponentiation if you are computing big powers of big matrices. This is basically asking how many matrix multiplications are required to compute A^k for a given k. For instance A^5 = A(A^2)^2, so you only need three matrix multiplies: A^2, (A^2)^2, and A(A^2)^2. This might be the simplest way to gain some efficiency, but you will probably still have to use explicit loops.
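For a single large power, here is a minimal sketch of exponentiation by squaring, an addition chain of length O(log k) (the function name is illustrative, not from any of the answers above):
import numpy as np

def matpow_by_squaring(D, k):
    result = np.eye(D.shape[0], dtype=D.dtype)
    base = D
    while k > 0:
        if k & 1:           # this bit of k contributes the current base
            result = result @ base
        base = base @ base  # square for the next bit
        k >>= 1
    return result

# matpow_by_squaring(np.array([[1, 2], [3, 4]]), 5) agrees with
# np.linalg.matrix_power(np.array([[1, 2], [3, 4]]), 5)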
Your question is also related to the problem of computing Ax, A^2x, ... , A^kx for a given A and x. This is an active area of research right now (search "matrix powers kernel"), since computing such a sequence efficiently is useful for parallel/communication avoiding Krylov subspace methods. If you're looking for a very efficient solution to your problem it might be worth looking into some of the results about this.

efficient numpy array creation

Given x, I want to produce x, log(x) as a numpy array whereby x has shape s, the result has shape (*s, 2). What's the neatest way to do this? x may just be a float, in which case I want a result with shape (2,).
An ugly way to do this is:
import numpy as np
x = np.asarray(x)
result = np.empty((*x.shape, 2))
result[..., 0] = x
result[..., 1] = np.log(x)
It's important to separate aesthetics from performance. Sometimes ugly code is fast. In fact, that's the case here. Although creating an empty array and then assigning values to slices may not look beautiful, it is fast.
import numpy as np
import timeit
import itertools as IT
import pandas as pd

def using_empty(x):
    x = np.asarray(x)
    result = np.empty(x.shape + (2,))
    result[..., 0] = x
    result[..., 1] = np.log(x)
    return result

def using_concat(x):
    x = np.asarray(x)
    return np.concatenate([x, np.log(x)], axis=-1).reshape(x.shape + (2,), order='F')

def using_stack(x):
    x = np.asarray(x)
    return np.stack([x, np.log(x)], axis=x.ndim)

def using_ufunc(x):
    return np.array([x, np.log(x)])
using_ufunc = np.vectorize(using_ufunc, otypes=[np.ndarray])

tests = [np.arange(600),
         np.arange(600).reshape(20, 30),
         np.arange(960).reshape(8, 15, 8)]

# check that all implementations return the same result
for x in tests:
    assert np.allclose(using_empty(x), using_concat(x))
    assert np.allclose(using_empty(x), using_stack(x))

timing = []
funcs = ['using_empty', 'using_concat', 'using_stack', 'using_ufunc']
for test, func in IT.product(tests, funcs):
    timing.append(timeit.timeit(
        '{}(test)'.format(func),
        setup='from __main__ import test, {}'.format(func), number=1000))

timing = pd.DataFrame(np.array(timing).reshape(-1, len(funcs)), columns=funcs)
print(timing)
yields the following timeit results on my machine:
using_empty using_concat using_stack using_ufunc
0 0.024754 0.025182 0.030244 2.414580
1 0.025766 0.027692 0.031970 2.408344
2 0.037502 0.039644 0.044032 3.907487
So using_empty is the fastest (of the options tested applied to tests).
Note that np.stack does exactly what you want, so
np.stack([x, np.log(x)], axis=x.ndim)
looks reasonably pretty, but it is also the slowest of the three options tested.
Note that along with being much slower, using_ufunc returns an array of object dtype:
In [236]: x = np.arange(6)
In [237]: using_ufunc(x)
Out[237]:
array([array([ 0., -inf]), array([ 1., 0.]),
array([ 2. , 0.69314718]),
array([ 3. , 1.09861229]),
array([ 4. , 1.38629436]), array([ 5. , 1.60943791])], dtype=object)
which is not the same as the desired result:
In [240]: using_empty(x)
Out[240]:
array([[ 0. , -inf],
[ 1. , 0. ],
[ 2. , 0.69314718],
[ 3. , 1.09861229],
[ 4. , 1.38629436],
[ 5. , 1.60943791]])
In [238]: using_ufunc(x).shape
Out[238]: (6,)
In [239]: using_empty(x).shape
Out[239]: (6, 2)
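Note that using_empty also handles the scalar case from the question, since np.asarray turns a float into a 0-d array (a quick check, not part of the original benchmark):
using_empty(1.0)        # -> array([1., 0.]), since log(1.0) == 0.0
using_empty(1.0).shape  # -> (2,)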

Avoid implicit conversion to matrix in numpy operations

Is there a way to globally avoid the matrix from appearing in any of the results of the numpy computations? For example currently if you have x as a numpy.ndarray and y as a scipy.sparse.csc_matrix, and you say x += y, x will become a matrix afterwards. Is there a way to prevent that from happening, i.e., keep x an ndarray, and more generally, keep using ndarray in all places where a matrix is produced?
I added the scipy tag; this is a scipy.sparse problem, not a np.matrix one.
In [250]: y=sparse.csr_matrix([[0,1],[1,0]])
In [251]: x=np.arange(2)
In [252]: y+x
Out[252]:
matrix([[0, 2],
[1, 1]])
So sparse + array => matrix.
(As a side note, np.matrix is a subclass of np.ndarray. sparse.csr_matrix is not a subclass; it has many numpy-like operations, but it implements them in its own code.)
In [255]: x += y
In [256]: x
Out[256]:
matrix([[0, 2],
[1, 1]])
Technically this shouldn't happen; in effect it is doing x = x + y, assigning a new value to x, not just modifying x.
If I first turn y into a regular dense matrix, I get an error. Allowing the action would change a 1d array into a 2d one.
In [258]: x += y.todense()
...
ValueError: non-broadcastable output operand with shape (2,) doesn't match the broadcast shape (2,2)
Changing x to 2d allows the addition to proceed - without changing array to matrix:
In [259]: x=np.eye(2)
In [260]: x
Out[260]:
array([[ 1., 0.],
[ 0., 1.]])
In [261]: x += y.todense()
In [262]: x
Out[262]:
array([[ 1., 1.],
[ 1., 1.]])
In general, performing addition/subtraction with sparse matrices is tricky. They were designed for matrix multiplication. Multiplication doesn't change sparsity as much as addition. y+1 for example makes it dense.
Without digging into the details of how sparse addition is coded, I'd say - don't try this x+=... operation without first turning y into a dense version.
In [265]: x += y.A
In [266]: x
Out[266]:
array([[ 1., 2.],
[ 2., 1.]])
I can't think of a good reason not to do this.
(I should check the scipy github for a bug issue on this).
scipy/sparse/compressed.py has the csr addition code. x+y uses x.__add__(y) but sometimes that is flipped to y.__add__(x). x+=y uses x.__iadd__(y). So I may need to examine __iadd__ for ndarray as well.
But the basic addition for a sparse matrix is:
def __add__(self, other):
    # First check if argument is a scalar
    if isscalarlike(other):
        if other == 0:
            return self.copy()
        else:  # Now we would add this scalar to every element.
            raise NotImplementedError('adding a nonzero scalar to a '
                                      'sparse matrix is not supported')
    elif isspmatrix(other):
        if (other.shape != self.shape):
            raise ValueError("inconsistent shapes")
        return self._binopt(other, '_plus_')
    elif isdense(other):
        # Convert this matrix to a dense matrix and add them
        return self.todense() + other
    else:
        return NotImplemented
So the y+x becomes y.todense() + x. And x+y uses the same thing.
Regardless of the += details, it is clear that adding a sparse to a dense (array or np.matrix) involves converting the sparse to dense. There's no code that iterates through the sparse values and adds those selectively to the dense array.
It's only if the arrays are both sparse that it performs a special sparse addition. y+y works, returning a sparse. y+=y fails with a NotImplementedError from sparse.base.__iadd__.
This is the best diagnostic sequence that I've come up with, trying various ways of adding y to a (2,2) array.
In [348]: x=np.eye(2)
In [349]: x+y
Out[349]:
matrix([[ 1., 1.],
[ 1., 1.]])
In [350]: x+y.todense()
Out[350]:
matrix([[ 1., 1.],
[ 1., 1.]])
Addition produces a matrix, but values can be written to x without changing x's class (or shape):
In [351]: x[:] = x+y
In [352]: x
Out[352]:
array([[ 1., 1.],
[ 1., 1.]])
+= with a dense matrix does the same:
In [353]: x += y.todense()
In [354]: x
Out[354]:
array([[ 1., 2.],
[ 2., 1.]])
but something in the sparse += changes the class of x:
In [355]: x += y
In [356]: x
Out[356]:
matrix([[ 1., 3.],
[ 3., 1.]])
Further testing, and looking at id(x) and x.__array_interface__, makes it clear that x += y replaces x. This is true even if x starts as np.matrix. So the sparse += is not an in-place operation. x += y.todense() is an in-place operation.
Yes, it's a bug; but https://github.com/scipy/scipy/issues/7826 says
I do not really see a way to change this.
An X += c * Y without todense follows. Some combinations of inc(various array / matrix, various sparse) have been tested, but certainly not all.
def inc(X, Y, c=1.):
    """ X += c * Y, X Y sparse or dense """
    if (not hasattr(X, "indices")        # dense += sparse
            and hasattr(Y, "indices")):
        # inc an ndarray view, because ndarray += sparse -> matrix
        X = getattr(X, "A", X).squeeze()
        X[Y.indices] += c * Y.data
    else:
        X += c * Y  # sparse + different sparse: SparseEfficiencyWarning
    return X
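A minimal usage sketch, assuming y is a one-row CSR matrix so that Y.indices holds its column indices (the variable names are illustrative):
import numpy as np
from scipy import sparse

x = np.zeros(5)
y = sparse.csr_matrix(np.array([[0., 2., 0., 3., 0.]]))
x = inc(x, y, c=1.)
print(x)        # [0. 2. 0. 3. 0.]
print(type(x))  # <class 'numpy.ndarray'> -- no matrix appeared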

Python Dynamic Array allocation, Matlab style

I'm trying to move a few Matlab libraries that I've built to the python environment. So far, the biggest issue I faced is the dynamic allocation of arrays based on index specification. For example, using Matlab, typing the following:
x = [1 2];
x(5) = 3;
would result in:
x = [ 1 2 0 0 3]
In other words, I didn't know beforehand the size of x, nor its content. The array must be defined on the fly, based on the indices that I'm providing.
In python, trying the following:
from numpy import *
x = array([1,2])
x[4] = 3
would result in the following error: IndexError: index out of bounds. One workaround is growing the array in a loop and then assigning the desired value:
from numpy import *

x = array([1, 2])
idx = 4
for i in range(size(x), idx + 1):
    x = append(x, 0)
x[idx] = 3
print(x)
It works, but it's not very convenient, and it might become very cumbersome for n-dimensional arrays. I thought about subclassing ndarray to achieve my goal, but I'm not sure if it would work. Does anybody know of a better approach?
Thanks for the quick reply. I didn't know about the __setitem__ method (I'm fairly new to Python). I simply subclassed np.ndarray as follows:
import numpy as np

class marray(np.ndarray):
    def __setitem__(self, key, value):
        # Array properties
        nDim = np.ndim(self)
        dims = list(np.shape(self))
        # Requested index
        if type(key) == int: key = key,
        nDim_rq = len(key)
        dims_rq = list(key)
        for i in range(nDim_rq): dims_rq[i] += 1
        # Provided indices match current array number of dimensions
        if nDim_rq == nDim:
            # Define new dimensions
            newdims = []
            for iDim in range(nDim):
                v = max([dims[iDim], dims_rq[iDim]])
                newdims.append(v)
            # Resize if necessary
            if newdims != dims:
                self.resize(newdims, refcheck=False)
        return super(marray, self).__setitem__(key, value)
And it works like a charm! However, I need to modify the above code so that __setitem__ allows changing the number of dimensions following this request:
a = marray([0,0])
a[3,1,0] = 0
Unfortunately, when I try to use numpy functions such as
self = np.expand_dims(self, 2)
the returned type is numpy.ndarray instead of __main__.marray. Any idea how I could enforce that numpy functions output an marray when an marray is provided as input? I think it should be doable using __array_wrap__, but I could never find out exactly how. Any help would be appreciated.
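One mechanism I know of here is view-casting, which reinterprets a plain ndarray result as the subclass; a hedged sketch, since it converts results after the fact rather than hooking every numpy function (numpy's subclassing docs cover __array_wrap__ and __array_finalize__ for the full story):
import numpy as np

class marray(np.ndarray):
    pass  # stand-in for the subclass defined above

base = np.expand_dims(np.zeros((2, 2)), 2)  # numpy returns a plain ndarray
m = base.view(marray)                       # reinterpret it as an marray
print(type(m))                              # <class '__main__.marray'>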
I took the liberty of updating my old answer from Dynamic list that automatically expands. I think this should do most of what you need/want:
class matlab_list(list):
    def __init__(self):
        def zero():
            while 1:
                yield 0
        self._num_gen = zero()

    def __setitem__(self, index, value):
        if isinstance(index, int):
            self.expandfor(index)
            return super(matlab_list, self).__setitem__(index, value)
        elif isinstance(index, slice):
            if index.stop < index.start:
                return super(matlab_list, self).__setitem__(index, value)
            else:
                self.expandfor(index.stop if abs(index.stop) > abs(index.start) else index.start)
                return super(matlab_list, self).__setitem__(index, value)

    def expandfor(self, index):
        if abs(index) > len(self) - 1:
            if index < 0:
                for i in range(abs(index) - len(self)):
                    self.insert(0, next(self._num_gen))
            else:
                for i in range(abs(index) - len(self) + 1):
                    self.append(next(self._num_gen))
# Usage
spec_list = matlab_list()
spec_list[5] = 14
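With the fixes above, this should leave spec_list zero-padded up to the requested index (a quick check, not from the original answer):
print(spec_list)  # [0, 0, 0, 0, 0, 14]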
This isn't quite what you want, but...
x = np.array([1, 2])
try:
    x[index] = value
except IndexError:
    # Will be trickier for multidimensional arrays; you'll need to use
    # x.shape and take advantage of numpy's advanced slicing ability.
    oldsize = len(x)
    x = np.resize(x, index + 1)  # Python uses C-style 0-based indices
    x[oldsize:index] = 0  # You could also do x[oldsize:] = 0, but that would
                          # mean assigning to the final position twice.
    x[index] = value
>>> x = np.array([1, 2])
>>> x = np.resize(x, 5)
>>> x[2:5] = 0
>>> x[4] = 3
>>> x
array([1, 2, 0, 0, 3])
Due to how numpy stores the data linearly under the hood (though whether it stores as row-major or column-major can be specified when creating arrays), multidimensional arrays are pretty tricky here.
>>> x = np.array([[1, 2, 3], [4, 5, 6]])
>>> np.resize(x, (6, 4))
array([[1, 2, 3, 4],
[5, 6, 1, 2],
[3, 4, 5, 6],
[1, 2, 3, 4],
[5, 6, 1, 2],
[3, 4, 5, 6]])
You'd need to do this or something similar:
>>> y = np.zeros((6, 4))
>>> y[:x.shape[0], :x.shape[1]] = x
>>> y
array([[ 1., 2., 3., 0.],
[ 4., 5., 6., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
A python dict will work well as a sparse array. The main issue is that the syntax for initializing the sparse array will not be as pretty:
listarray = [100,200,300]
dictarray = {0:100, 1:200, 2:300}
but after that, the syntax for inserting or retrieving elements is the same:
dictarray[5] = 2345
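Reading back unset indices can mimic Matlab's implicit zero-fill with dict.get (a small sketch):
dictarray = {0: 100, 1: 200, 2: 300}
dictarray[5] = 2345
print(dictarray.get(3, 0))  # 0 -- unset positions read as zero
print(max(dictarray) + 1)   # 6 -- the logical "length" of the sparse array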

Better way to shuffle two numpy arrays in unison

I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.
This code works, and illustrates my goals:
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
    shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
    permutation = numpy.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b
For example:
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
[1, 1],
[3, 3]]), array([2, 1, 3]))
However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large.
Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too.
One other thought I had was this:
def shuffle_in_unison_scary(a, b):
    rng_state = numpy.random.get_state()
    numpy.random.shuffle(a)
    numpy.random.set_state(rng_state)
    numpy.random.shuffle(b)
This works...but it's a little scary, as I see little guarantee it'll continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.
You can use NumPy's array indexing:
def unison_shuffled_copies(a, b):
    assert len(a) == len(b)
    p = numpy.random.permutation(len(a))
    return a[p], b[p]
This will result in the creation of separate unison-shuffled arrays.
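The same idea with numpy's newer Generator API (np.random.default_rng, available since NumPy 1.17); a sketch with an illustrative function name:
import numpy as np

def unison_shuffled_copies_rng(a, b, seed=None):
    assert len(a) == len(b)
    rng = np.random.default_rng(seed)
    p = rng.permutation(len(a))  # a random permutation of 0..len(a)-1
    return a[p], b[p]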
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y, random_state=0)
To learn more, see http://scikit-learn.org/stable/modules/generated/sklearn.utils.shuffle.html
Your "scary" solution does not appear scary to me. Calling shuffle() for two sequences of the same length results in the same number of calls to the random number generator, and these are the only "random" elements in the shuffle algorithm. By resetting the state, you ensure that the calls to the random number generator will give the same results in the second call to shuffle(), so the whole algorithm will generate the same permutation.
If you don't like this, a different solution would be to store your data in one array instead of two right from the beginning, and create two views into this single array simulating the two arrays you have now. You can use the single array for shuffling and the views for all other purposes.
Example: Let's assume the arrays a and b look like this:
a = numpy.array([[[ 0., 1., 2.],
[ 3., 4., 5.]],
[[ 6., 7., 8.],
[ 9., 10., 11.]],
[[ 12., 13., 14.],
[ 15., 16., 17.]]])
b = numpy.array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.]])
We can now construct a single array containing all the data:
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
# array([[ 0., 1., 2., 3., 4., 5., 0., 1.],
# [ 6., 7., 8., 9., 10., 11., 2., 3.],
# [ 12., 13., 14., 15., 16., 17., 4., 5.]])
Now we create views simulating the original a and b:
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
The data of a2 and b2 is shared with c. To shuffle both arrays simultaneously, use numpy.random.shuffle(c).
In production code, you would of course try to avoid creating the original a and b at all and right away create c, a2 and b2.
This solution could be adapted to the case that a and b have different dtypes.
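A quick check, not in the original answer, that the views really share memory with c, so shuffling c reorders both:
numpy.random.shuffle(c)            # shuffle rows of the combined array
print(numpy.shares_memory(c, a2))  # True: a2 is a view into c
print(a2)                          # rows permuted together with b2
print(b2)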
Very simple solution:
randomize = np.arange(len(x))
np.random.shuffle(randomize)
x = x[randomize]
y = y[randomize]
The two arrays x, y are now both randomly shuffled in the same way.
James wrote an sklearn solution in 2015 which is helpful. But he added a random state variable, which is not needed. In the code below, the random state from numpy is automatically assumed.
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y)
from numpy.random import permutation
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data #numpy array
y = iris.target #numpy array
# Data is currently unshuffled; we should shuffle
# each X[i] with its corresponding y[i]
perm = permutation(len(X))
X = X[perm]
y = y[perm]
Shuffle any number of arrays together, in-place, using only NumPy.
import numpy as np

def shuffle_arrays(arrays, set_seed=-1):
    """Shuffles arrays in-place, in the same order, along axis=0

    Parameters:
    -----------
    arrays : List of NumPy arrays.
    set_seed : Seed value if int >= 0, else seed is random.
    """
    assert all(len(arr) == len(arrays[0]) for arr in arrays)
    seed = np.random.randint(0, 2**(32 - 1) - 1) if set_seed < 0 else set_seed
    for arr in arrays:
        rstate = np.random.RandomState(seed)
        rstate.shuffle(arr)
And can be used like this
a = np.array([1, 2, 3, 4, 5])
b = np.array([10,20,30,40,50])
c = np.array([[1,10,11], [2,20,22], [3,30,33], [4,40,44], [5,50,55]])
shuffle_arrays([a, b, c])
A few things to note:
- The assert ensures that all input arrays have the same length along their first dimension.
- Arrays are shuffled in-place by their first dimension; nothing is returned.
- The random seed is within the positive int32 range.
- A repeatable shuffle can be obtained by setting the seed value.
After the shuffle, the data can be split using np.split or referenced using slices, depending on the application.
You can make an index array:
s = np.arange(0, len(a), 1)
then shuffle it:
np.random.shuffle(s)
Now use this s to index your arrays. The same shuffled index array returns identically shuffled vectors.
x_data = x_data[s]
x_label = x_label[s]
There is a well-known function that can handle this:
from sklearn.model_selection import train_test_split
X, _, Y, _ = train_test_split(X,Y, test_size=0.0)
Just setting test_size to 0 will avoid splitting and give you shuffled data.
Though it is usually used to split train and test data, it does shuffle them too.
From the documentation:
Split arrays or matrices into random train and test subsets. Quick utility that wraps input validation and next(ShuffleSplit().split(X, y)) and application to input data into a single call for splitting (and optionally subsampling) data in a oneliner.
This seems like a very simple solution:
import numpy as np
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    c = np.arange(len(a))
    np.random.shuffle(c)
    return a[c], b[c]

a = np.asarray([[1, 1], [2, 2], [3, 3]])
b = np.asarray([11, 22, 33])
shuffle_in_unison(a, b)
Out[94]:
(array([[3, 3],
        [2, 2],
        [1, 1]]),
 array([33, 22, 11]))
One way in which in-place shuffling can be done for connected lists is to use a seed (it could be random) and numpy.random.shuffle to do the shuffling.
# Set seed to a random number if you want the shuffling to be non-deterministic.
def shuffle(a, b, seed):
    np.random.seed(seed)
    np.random.shuffle(a)
    np.random.seed(seed)
    np.random.shuffle(b)
That's it. This will shuffle both a and b in the exact same way. This is also done in-place which is always a plus.
EDIT: don't use np.random.seed(); use np.random.RandomState instead:
def shuffle(a, b, seed):
    rand_state = np.random.RandomState(seed)
    rand_state.shuffle(a)
    rand_state.seed(seed)
    rand_state.shuffle(b)
When calling it just pass in any seed to feed the random state:
a = [1,2,3,4]
b = [11, 22, 33, 44]
shuffle(a, b, 12345)
Output:
>>> a
[1, 4, 2, 3]
>>> b
[11, 44, 22, 33]
Edit: Fixed code to re-seed the random state
Say we have two arrays: a and b.
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
b = np.array([[9,1,1],[6,6,6],[4,2,0]])
We can first obtain row indices by permuting the first dimension:
indices = np.random.permutation(a.shape[0])
[1 2 0]
Then use advanced indexing.
Here we are using the same indices to shuffle both arrays in unison.
a_shuffled = a[indices[:,np.newaxis], np.arange(a.shape[1])]
b_shuffled = b[indices[:,np.newaxis], np.arange(b.shape[1])]
This is equivalent to
np.take(a, indices, axis=0)
[[4 5 6]
[7 8 9]
[1 2 3]]
np.take(b, indices, axis=0)
[[6 6 6]
[4 2 0]
[9 1 1]]
If you want to avoid copying arrays, then I would suggest that instead of generating a permutation list, you go through every element in the array, and randomly swap it to another position in the array
for old_index in range(len(a)):
    new_index = numpy.random.randint(old_index + 1)
    a[old_index], a[new_index] = a[new_index], a[old_index]
    b[old_index], b[new_index] = b[new_index], b[old_index]
This implements the Knuth-Fisher-Yates shuffle algorithm.
Shortest and easiest way in my opinion, use a seed:
import random

random.seed(seed)
random.shuffle(x_data)
# reset the same seed to get the identical random sequence and shuffle the y
random.seed(seed)
random.shuffle(y_data)
Most solutions above work; however, if you have column vectors you have to transpose them first. Here is an example:
def shuffle(self) -> None:
    """
    Shuffles X and Y
    """
    x = self.X.T
    y = self.Y.T
    p = np.random.permutation(len(x))
    self.X = x[p].T
    self.Y = y[p].T
With an example, this is what I'm doing:
from random import shuffle
import numpy as np

# assumes images and labels are existing arrays of equal length
combo = []
for i in range(60000):
    combo.append((images[i], labels[i]))
shuffle(combo)

im = []
lab = []
for c in combo:
    im.append(c[0])
    lab.append(c[1])
images = np.asarray(im)
labels = np.asarray(lab)
I extended python's random.shuffle() to take a second arg:
import random

def shuffle_together(x, y):
    assert len(x) == len(y)
    for i in reversed(range(1, len(x))):
        # pick an element in x[:i+1] with which to exchange x[i]
        j = int(random.random() * (i + 1))
        x[i], x[j] = x[j], x[i]
        y[i], y[j] = y[j], y[i]
That way I can be sure that the shuffling happens in-place, and the function is not all too long or complicated.
Just use numpy.
First merge the two input arrays (the 1D array is the labels y, the 2D array is the data x) and shuffle the combined array with NumPy's shuffle method. Finally, split them again and return.
import numpy as np

def shuffle_2d(a, b):
    rows = a.shape[0]
    if b.shape != (rows, 1):
        b = b.reshape((rows, 1))
    S = np.hstack((b, a))
    np.random.shuffle(S)
    b, a = S[:, 0], S[:, 1:]
    return a, b

features, samples = 2, 5
x, y = np.random.random((samples, features)), np.arange(samples)
x, y = shuffle_2d(x, y)
