I have an array of size 2 x 2 and I want to change its size to 3 x 4.
A = [[1, 2], [2, 3]]
A_new = [[1, 2, 0, 0], [2, 3, 0, 0], [0, 0, 0, 0]]
I tried reshape, but it didn't work, and append can only append a row, not a column. I don't want to iterate through each row to add the column.
Is there any vectorized way to do this, like in MATLAB, where A(:,3:4) = 0; and A(3,:) = 0; convert A from 2 x 2 to 3 x 4? Is there a similar way in Python?
In Python, if the input is a numpy array, you can use np.lib.pad to pad zeros around it -
import numpy as np
A = np.array([[1, 2 ],[2, 3]]) # Input
A_new = np.lib.pad(A, ((0,1),(0,2)), 'constant', constant_values=(0)) # Output
Sample run -
In [7]: A # Input: A numpy array
Out[7]:
array([[1, 2],
[2, 3]])
In [8]: np.lib.pad(A, ((0,1),(0,2)), 'constant', constant_values=(0))
Out[8]:
array([[1, 2, 0, 0],
[2, 3, 0, 0],
[0, 0, 0, 0]]) # Zero padded numpy array
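Note: in recent NumPy versions the same function is exposed directly as np.pad (np.lib.pad is just an alias), and since NumPy 1.17 'constant' is the default mode. A minimal equivalent sketch:
import numpy as np
A = np.array([[1, 2], [2, 3]])
A_new = np.pad(A, ((0, 1), (0, 2)))  # pads with the constant 0 by default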
If you don't want to do the math of how many zeros to pad, you can let the code do it for you given the output array size -
In [29]: A
Out[29]:
array([[1, 2],
[2, 3]])
In [30]: new_shape = (3,4)
In [31]: shape_diff = np.array(new_shape) - np.array(A.shape)
In [32]: np.lib.pad(A, ((0,shape_diff[0]),(0,shape_diff[1])),
'constant', constant_values=(0))
Out[32]:
array([[1, 2, 0, 0],
[2, 3, 0, 0],
[0, 0, 0, 0]])
Or, you can start off with a zero-initialized output array and then copy the elements of A back into it -
In [38]: A
Out[38]:
array([[1, 2],
[2, 3]])
In [39]: A_new = np.zeros(new_shape,dtype = A.dtype)
In [40]: A_new[0:A.shape[0],0:A.shape[1]] = A
In [41]: A_new
Out[41]:
array([[1, 2, 0, 0],
[2, 3, 0, 0],
[0, 0, 0, 0]])
In MATLAB, you can use padarray -
A_new = padarray(A,[1 2],'post')
Sample run -
>> A
A =
1 2
2 3
>> A_new = padarray(A,[1 2],'post')
A_new =
1 2 0 0
2 3 0 0
0 0 0 0
A pure Python way to achieve this:
row = 3
column = 4
A = [[1, 2],[2, 3]]
A_new = list(map(lambda x: x + ([0] * (column - len(x))), A + ([[0] * column] * (row - len(A)))))
then A_new is [[1, 2, 0, 0], [2, 3, 0, 0], [0, 0, 0, 0]].
Good to know:
[x] * n will repeat x n times in a list
Lists can be concatenated using the + operator
Explanation:
map(function, list) iterates over each item in list, passes it to function, and replaces the item with the return value
A + ([[0] * column] * (row - len(A))): A is extended with the missing "zeroed" rows
[0] * column repeats the single 0 column times
[[0] * column] * (row - len(A)) repeats that zero row once for each missing row
x + ([0] * (column - len(x))): for each existing row x, a list with the remaining number of zero columns is appended
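In Python 3, map returns a lazy iterator, which is why the expression above is wrapped in list(). An equivalent list-comprehension sketch of the same padding logic:
rows, cols = 3, 4
A = [[1, 2], [2, 3]]
# Pad each existing row with zeros, then append the missing all-zero rows.
A_new = [r + [0] * (cols - len(r)) for r in A] + [[0] * cols for _ in range(rows - len(A))]
# [[1, 2, 0, 0], [2, 3, 0, 0], [0, 0, 0, 0]]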
Q: Is there a vectorised way to ...
A: Yes, there is
A = np.ones( (2,2) ) # numpy create/assign 1-s
B = np.zeros( (4,5) ) # numpy create/assign 0-s "padding" mat
B[:A.shape[0],:A.shape[1]] += A[:,:] # numpy vectorised .ADD at a cost of ~270 us
B[:A.shape[0],:A.shape[1]] = A[:,:] # numpy vectorised .STO at a cost of ~180 us
B[:A.shape[0],:A.shape[1]] = A # numpy high-level .STO at a cost of ~450 us
B
Out[4]:
array([[ 1., 1., 0., 0., 0.],
[ 1., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
Q: Is it resource-efficient to "extend" A's data structure in a smart way "behind the curtain"?
A: No, not really. Try bigger, big or huge sizes to feel the resource-allocation/processing costs...
Numpy has a genuine data structure "behind the curtain" that allows a lot of smart tricks like strided (re-)mapping, view-based operations and fast vectorised/broadcast operations; however, changing the memory layout "across the strided smart-mapping" is rather expensive.
For this reason, numpy has provided since 1.7.0 a built-in layout/mapper-modifier, .lib.pad(), that is well aware of and optimised for the "behind-the-curtain" structures, so it handles them both smartly and fast.
B = np.lib.pad( A,
( ( 0, 2 ), ( 0, 3 ) ),
'constant',
constant_values = ( 0, 0 )
) # .pad() at a cost of ~ 270 us
Related
I want to write the equivalent of the following MATLAB code:
for i = 1:N
for j = 1:N
Ab(i,j) = (Ap(i)*Ap(j))^(0.5)*(1 - kij(i,j)) ;
end
end
However an error appears: "all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)"
ab=np.matrix((2, 2))
for i in range(0,nc):
    for j in range(0, nc):
        np.append(ab,((Ap[i]*Ap[j])**(0.5)*(1 - kij[i][j])))
There is a bit of context missing, but if I'm guessing correctly from the MATLAB part, you can write something like this.
ab = np.zeros((2, 2))
for i in range(ab.shape[0]):  # no need to write 0 explicitly; use the array's shape to limit the iterations
    for j in range(ab.shape[1]):
        ab[i, j] = (Ap[i]*Ap[j])**0.5 * (1 - kij[i][j])
My assumptions
ab is meant to be a 2x2 matrix, not a 1x2 matrix with the values [2, 2], which is what np.matrix confusingly does (at least these were my expectations coming from MATLAB). np.zeros creates an array of zeros of size 2x2. Array and matrix are slightly different things in numpy, and np.matrix is being slowly deprecated (more here: https://numpy.org/doc/stable/reference/generated/numpy.matrix.html?highlight=matrix#numpy.matrix)
nc is the size of the ab matrix
Why did you get the error?
np.matrix((2, 2)) - creates 1x2 matrix with values 2 and 2 [[2, 2]]
(Ap[i]*Ap[j])**(0.5)*(1 - kij[i][j]) - this is a scalar value
np.append(ab, scalar_value) - tries to append a scalar to a matrix, but there is a dimension mismatch between ab and the scalar value, which is what the error states. Essentially, for this to work, they should be objects with compatible dimensions.
Examples
>>> np.zeros((2, 2))
array([[0., 0.],
[0., 0.]])
>>> np.matrix((2, 2))
matrix([[2, 2]])
>>> np.array((2, 2))
array([2, 2])
>>> np.append(np.matrix((2, 2)), [[3, 3]], axis=0)
matrix([[2, 2],
[3, 3]])
>>> np.append(np.zeros((2, 2)), [[3, 3]], axis=0)
array([[0., 0.],
[0., 0.],
[3., 3.]])
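Incidentally, since the formula Ab(i,j) = (Ap(i)*Ap(j))^0.5 * (1 - kij(i,j)) is elementwise, the loops can be replaced entirely with broadcasting. A sketch, assuming Ap is a 1-D array of length nc and kij an nc x nc array (the values below are just placeholders):
import numpy as np

nc = 2
Ap = np.array([1.0, 4.0])   # placeholder values
kij = np.zeros((nc, nc))    # placeholder values

# np.outer gives Ap[i]*Ap[j] for all pairs at once
ab = np.sqrt(np.outer(Ap, Ap)) * (1 - kij)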
I have an array x = np.empty([2,3]). Assume I have two sets of logical indices, indx1 and indx2, each paired with a different set of columns, set1 and set2:
indx1 = [False,False,True]
set1 = np.array([[-1],[-1]])
indx2 = [True,True,False]
set2 = np.array([[1,2],[1,2]])
# need to join these two write operations into one
x[:,indx1] = set1
x[:,indx2] = set2
>>> x
array([[1., 2., -1.],
[1., 2., -1.]])
How can I use indx1 and indx2 at the same time? For instance, I am looking for something like this (which does not work):
x[:,[indx1,indx2]] = [set1,set2]
In your case the arrays have different shapes, so you concatenate along different axes (axis=1 to join the column sets side by side, axis=0 to join the index lists end to end).
The easiest concatenation:
import numpy as np
set1 = np.array([[3],[3]])
set2 = np.array([[1,2],[1,2]])
indx1 = [False,False,True]
indx2 = [True,True,False]
sets = np.concatenate((set1, set2), axis=1)
np.concatenate((indx1, indx2), axis=0)
sets.sort()
output sets:
array([[1, 2, 3],
       [1, 2, 3]])
output index:
array([False, False,  True,  True,  True, False])
If you want to correlate the sets with the indices, please provide the expected output.
I did not manage to find an exact solution to the problem, but maybe (depending on how you generate the sets and indices), this will lead you in the right direction.
Let's suppose that, instead of the sparse definition of set1 and set2, you have dense arrays, each with the same size as x:
indx1 = [False,False,True]
indx2 = [True,True,False]
fullset1 = np.array([[0, 0, -1],
[0, 0, -1]])
fullset2 = np.array([[1, 2, 0],
[1, 2, 0]])
x = np.select( [indx1, indx2], [fullset1, fullset2] )
print(x)
#[[1 2 -1]
# [1 2 -1]]
It works with one command and can be easily extended if you have indx3, indx4, etc. However, I see several drawbacks. First, it creates a new variable that satisfies the conditions, which may not be your use case. Also, if there is an index that is set to false for all indx variables, the result might be unexpected:
indx1 = [False,False,True,False]
indx2 = [True,True,False,False]
fullset1 = np.array([[0, 0, -1, 0],
[0, 0, -1, 0]])
fullset2 = np.array([[1, 2, 0, 0],
[1, 2, 0, 0]])
x = np.select( [indx1, indx2], [fullset1, fullset2], default=None )
print(x)
#[[1 2 -1 None]
# [1 2 -1 None]]
In that case, my proposal (though I haven't tested the performance) would be to use an intermediate variable and np.where to fill the final variable:
x = np.array([[11, 12, 13, 14],
[15, 16, 17, 18]])
#....
intermediate_x = np.select( [indx1, indx2], [fullset1, fullset2], default=None )
indx_final = np.where(intermediate_x != None)  # positions that actually received a value
x[indx_final] = intermediate_x[indx_final]
print(x)
#[[ 1 2 -1 14]
# [ 1 2 -1 18]]
I read about how important it is to preallocate a numpy array. In my case, however, I am not sure how to do this. I want to preallocate an n x m matrix. That sounds simple enough:
M = np.zeros((n,m))
However, what if my matrix is a matrix of matrices? So what if each of these nxm elements is actually of the form
np.array([[t], [x0,x1,x2], [y0,y1,y2]])
I know that in that case, M would have the shape (n,m,3).
As an example, later I want to have something like this
[[[[0], [0,1,2], [3,4,5]],
[[1], [10,11,12], [13,14,15]]],
[[[0], [100,101,102], [103,104,105]],
[[1], [110,111,112], [113,114,115]]]]
I tried simply doing
M = np.zeros((2,2,3))
but then
M[0,0,:] = np.array([[0], [0,1,2], [3,4,5]])
will give me an error
ValueError: setting an array element with a sequence.
Can I not preallocate this monster? Or should I approach this in a completely different way?
Thanks for your help
You have to make sure you preallocate the correct number of dimensions and elements along each dimension to use simple assignments to fill it.
For example, say you want to save 3 2x3 matrices:
number_of_matrices = 3
matrix_dim_1 = 2
matrix_dim_2 = 3
M = np.empty((number_of_matrices, matrix_dim_1, matrix_dim_2))
M[0] = np.array([[ 0, 1, 2], [ 3, 4, 5]])
M[1] = np.array([[100, 101, 102], [103, 104, 105]])
M[2] = np.array([[ 10, 11, 12], [ 13, 14, 15]])
M
#array([[[ 0., 1., 2.], # matrix 1
# [ 3., 4., 5.]],
#
# [[ 100., 101., 102.], # matrix 2
# [ 103., 104., 105.]],
#
# [[ 10., 11., 12.], # matrix 3
# [ 13., 14., 15.]]])
Your approach has some problems. The array you want to save is not a valid n-dimensional numpy array:
np.array([[0], [0,1,2], [3,4,5]])
# array([[0], [0, 1, 2], [3, 4, 5]], dtype=object)
# |----!!----|
# ^-------^----------^ 3 items in first dimension
# ^ 1 item in first item of 2nd dim
# ^--^--^ 3 items in second item of 2nd dim
# ^--^--^ 3 items in third item of 2nd dim
It just creates a 3-item array containing Python list objects. You probably want an array containing numbers, so you need to care about the dimensions: your np.array([[0], [0,1,2], [3,4,5]]) could be a 3x1 array or a 3x3 array, and numpy doesn't know what to do in this case, so it saves the rows as objects (the array now has only 1 dimension!).
The other problem is that you want to set one element of the preallocated array to another array containing more than one element. This is not possible (unless you already have an object array). You have two options here:
Fill as many elements in the preallocated array as are required by the array:
M[0, :, :] = np.array([[0,1,2], [3,4,5]])
# ^--------------------^--------^ First dimension has 2 items
# ^---------------^-^-^ Second dimension has 3 items
# ^------------------------^-^-^ dito
# if it's the first dimension you could also use M[0]
Create an object array and set each element (not recommended; you lose most of the advantages of numpy arrays):
M = np.empty((3), dtype='object')
M[0] = np.array([[0,1,2], [3,4,5]])
M[1] = np.array([[0,1,2], [3,4,5]])
M[2] = np.array([[0,1,2], [3,4,5]])
M
#array([array([[0, 1, 2],
# [3, 4, 5]]),
# array([[0, 1, 2],
# [3, 4, 5]]),
# array([[0, 1, 2],
# [3, 4, 5]])], dtype=object)
If you know you will only store values t, y, x for each point in n, m, then it may be easier, and computationally faster, to have three numpy arrays.
So:
M_T = np.zeros((n,m))
M_Y = np.zeros((n,m))
M_X = np.zeros((n,m))
I believe you can now type 'normal' python operators to do array logic, such as:
MX = np.ones((n,m))
MY = np.ones((n,m))
MT = MX + MY
MT ** MT
_ * 7.5
By defining array-friendly functions (similar to MATLAB) you will get a big speed increase for calculations; a sketch follows below.
Of course if you need more variables at each point then this may become unwieldy.
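For instance, a sketch of such an array-friendly function over the separate grids (the function name and formula are purely illustrative):
import numpy as np

def magnitude(mx, my):
    # operates elementwise on the whole n x m grid; no Python loops needed
    return np.sqrt(mx**2 + my**2)

MX = np.ones((3, 4))
MY = np.ones((3, 4)) * 2.0
MV = magnitude(MX, MY)  # shape (3, 4)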
I have a numpy array, indices:
array([[ 0, 0, 0],
[ 0, 0, 0],
[ 2, 0, 2],
[ 0, 0, 0],
[ 2, 0, 2],
[95, 71, 95]])
I have another array of the same length called distances:
array([ 0.98713981, 1.04705992, 1.42340327, 74.0139111 ,
74.4285216 , 74.84623217])
All of the rows in indices have a match in the distances array. The problem is, there are duplicates in the indices array, and they have different values in the corresponding distances array. I would like to get the minimum distance for all triplets of indices, and discard the others. Therefore, with the inputs above, I want the output:
indicesOUT =
array([[ 0, 0, 0],
[ 2, 0, 2],
[95, 71, 95]])
distancesOUT=
array([ 0.98713981, 1.42340327, 74.84623217])
My current strategy is as follows:
import numpy as np
indicesOUT = []
distancesOUT = []
for i in range(6):
    for j in range(6):
        for k in range(6):
            if len([s for s in indicesOUT if [i,j,k] == s]) == 0:
                current = np.array([i, j, k])
                ind = np.where((indices == current).all(-1) == True)[0]
                currentDistances = distances[ind]
                dist = np.amin(currentDistances)
                indicesOUT.append([i, j, k])
                distancesOUT.append(dist)
The problem is, the actual arrays have about 4 million elements each, so this approach is way too slow. What is the most efficient way of doing this?
This is essentially a grouping operation, and NumPy is not well-optimized for it. Fortunately, the Pandas package has some very fast tools that can be adapted to this exact problem.
With your data above, we can do this:
import pandas as pd
def drop_duplicates(indices, distances):
    data = pd.Series(distances)
    grouped = data.groupby(list(indices.T)).min().reset_index()
    return grouped.values[:, :3], grouped.values[:, 3]
And the output for your data is
array([[ 0., 0., 0.],
[ 2., 0., 2.],
[ 95., 71., 95.]]),
array([ 0.98713981, 1.42340327, 74.84623217])
My benchmark shows that for 4,000,000 elements, this should run in about a second:
indices = np.random.randint(0, 100, size=(4000000, 3))
distances = np.random.random(4000000)
%timeit drop_duplicates(indices, distances)
# 1 loops, best of 3: 1.15 s per loop
As written above, the input order of the indices will not necessarily be preserved; keeping the original order would require a bit more thought.
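If you prefer to stay in pure NumPy, here is a sketch assuming NumPy >= 1.13 (for np.unique with axis=0): group the rows with np.unique and reduce with np.minimum.at. It is typically slower than the Pandas groupby at this scale, but avoids the extra dependency:
import numpy as np

def drop_duplicates_np(indices, distances):
    # unique rows, plus the group id of every original row
    uniq, inv = np.unique(indices, axis=0, return_inverse=True)
    out = np.full(len(uniq), np.inf)
    np.minimum.at(out, inv.ravel(), distances)  # per-group minimum
    return uniq, out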
Given a 1D array of indices:
a = array([1, 0, 3])
I want to one-hot encode this as a 2D array:
b = array([[0,1,0,0], [1,0,0,0], [0,0,0,1]])
Create a zeroed array b with enough columns, i.e. a.max() + 1.
Then, for each row i, set the a[i]th column to 1.
>>> a = np.array([1, 0, 3])
>>> b = np.zeros((a.size, a.max() + 1))
>>> b[np.arange(a.size), a] = 1
>>> b
array([[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 0., 0., 0., 1.]])
>>> values = [1, 0, 3]
>>> n_values = np.max(values) + 1
>>> np.eye(n_values)[values]
array([[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 0., 0., 0., 1.]])
In case you are using keras, there is a built in utility for that:
from keras.utils.np_utils import to_categorical
categorical_labels = to_categorical(int_labels, num_classes=3)
And it does pretty much the same as @YXD's answer (see the source code).
Here is what I find useful:
def one_hot(a, num_classes):
    return np.squeeze(np.eye(num_classes)[a.reshape(-1)])
Here num_classes stands for number of classes you have. So if you have a vector with shape of (10000,) this function transforms it to (10000,C). Note that a is zero-indexed, i.e. one_hot(np.array([0, 1]), 2) will give [[1, 0], [0, 1]].
Exactly what you wanted to have I believe.
PS: the source is Sequence models - deeplearning.ai
You can also use the eye function of numpy:
numpy.eye(number of classes)[vector containing the labels]
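Spelled out concretely (a minimal sketch; the variable names are just illustrative):
import numpy as np

labels = np.array([1, 0, 3])
one_hot = np.eye(labels.max() + 1)[labels]  # pick rows of the identity matrix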
You can use sklearn.preprocessing.LabelBinarizer:
Example:
import sklearn.preprocessing
a = [1,0,3]
label_binarizer = sklearn.preprocessing.LabelBinarizer()
label_binarizer.fit(range(max(a)+1))
b = label_binarizer.transform(a)
print('{0}'.format(b))
output:
[[0 1 0 0]
[1 0 0 0]
[0 0 0 1]]
Amongst other things, you may initialize sklearn.preprocessing.LabelBinarizer() so that the output of transform is sparse.
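For example, a sketch using the sparse_output flag:
import sklearn.preprocessing

label_binarizer = sklearn.preprocessing.LabelBinarizer(sparse_output=True)
label_binarizer.fit(range(4))
b = label_binarizer.transform([1, 0, 3])  # a scipy.sparse CSR matrix instead of a dense array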
For one-hot encoding with pandas:
one_hot_encode = pandas.get_dummies(array)
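A small sketch of what that looks like (get_dummies returns a pandas DataFrame and only creates columns for the values actually present):
import pandas as pd

array = [1, 0, 3]
one_hot_encode = pd.get_dummies(array)
#    0  1  3
# 0  0  1  0
# 1  1  0  0
# 2  0  0  1
# (0/1 shown for readability; recent pandas versions return booleans)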
You can use the following code to convert to a one-hot vector.
Let x be a normal class vector: a single column with classes 0 to some number:
import numpy as np
np.eye(x.max()+1)[x]
If 0 is not a class, remove the +1.
Here is a function that converts a 1-D vector to a 2-D one-hot array.
#!/usr/bin/env python
import numpy as np
def convertToOneHot(vector, num_classes=None):
    """
    Converts an input 1-D vector of integers into an output
    2-D array of one-hot vectors, where an i'th input value
    of j will set a '1' in the i'th row, j'th column of the
    output array.

    Example:
        v = np.array((1, 0, 4))
        one_hot_v = convertToOneHot(v)
        print(one_hot_v)
        [[0 1 0 0 0]
         [1 0 0 0 0]
         [0 0 0 0 1]]
    """
    assert isinstance(vector, np.ndarray)
    assert len(vector) > 0

    if num_classes is None:
        num_classes = np.max(vector) + 1
    else:
        assert num_classes > 0
        assert num_classes > np.max(vector)  # classes are zero-indexed

    result = np.zeros(shape=(len(vector), num_classes))
    result[np.arange(len(vector)), vector] = 1
    return result.astype(int)
Below is some example usage:
>>> a = np.array([1, 0, 3])
>>> convertToOneHot(a)
array([[0, 1, 0, 0],
[1, 0, 0, 0],
[0, 0, 0, 1]])
>>> convertToOneHot(a, num_classes=10)
array([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]])
I think the short answer is no. For a more generic case in n dimensions, I came up with this:
# For 2-dimensional data, 4 values
a = np.array([[0, 1, 2], [3, 2, 1]])
z = np.zeros(list(a.shape) + [4])
z[tuple(list(np.indices(z.shape[:-1])) + [a])] = 1  # tuple() avoids deprecated list-of-arrays indexing
I am wondering if there is a better solution -- I don't like that I have to create those lists in the last two lines. Anyway, I did some measurements with timeit and it seems that the numpy-based (indices/arange) and the iterative versions perform about the same.
Just to elaborate on the excellent answer from K3---rnc, here is a more generic version:
def onehottify(x, n=None, dtype=float):
    """1-hot encode x with the max value n (computed from data if n is None)."""
    x = np.asarray(x)
    n = np.max(x) + 1 if n is None else n
    return np.eye(n, dtype=dtype)[x]
Also, here is a quick-and-dirty benchmark of this method and a method from the currently accepted answer by YXD (slightly changed, so that they offer the same API except that the latter works only with 1D ndarrays):
def onehottify_only_1d(x, n=None, dtype=float):
    x = np.asarray(x)
    n = np.max(x) + 1 if n is None else n
    b = np.zeros((len(x), n), dtype=dtype)
    b[np.arange(len(x)), x] = 1
    return b
The latter method is ~35% faster (MacBook Pro 13 2015), but the former is more general:
>>> import numpy as np
>>> np.random.seed(42)
>>> a = np.random.randint(0, 9, size=(10_000,))
>>> a
array([6, 3, 7, ..., 5, 8, 6])
>>> %timeit onehottify(a, 10)
188 µs ± 5.03 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit onehottify_only_1d(a, 10)
139 µs ± 2.78 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
If using tensorflow, there is one_hot():
import tensorflow as tf
import numpy as np
a = np.array([1, 0, 3])
depth = 4
b = tf.one_hot(a, depth)
# <tf.Tensor: shape=(3, 4), dtype=float32, numpy=
# array([[0., 1., 0., 0.],
#        [1., 0., 0., 0.],
#        [0., 0., 0., 1.]], dtype=float32)>
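If you need a plain ndarray back (assuming TensorFlow 2.x in eager mode), call .numpy() on the result:
b_np = b.numpy()  # numpy array of shape (3, 4), dtype float32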
def one_hot(n, class_num, col_wise=True):
    a = np.eye(class_num)[n.reshape(-1)]
    return a.T if col_wise else a
# Column for different hot
print(one_hot(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 9, 9, 9, 8, 7]), 10))
# Row for different hot
print(one_hot(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 9, 9, 9, 8, 7]), 10, col_wise=False))
I recently ran into a problem of the same kind and found that the posted solutions are only satisfying if your numbers form a contiguous range. For example, if you want to one-hot encode the following list:
all_good_list = [0,1,2,3,4]
go ahead, the solutions posted above will work fine. But what about data like this:
problematic_list = [0,23,12,89,10]
If you do it with the methods mentioned above, you will likely end up with 90 one-hot columns. This is because all the answers include something like n = np.max(a) + 1. I found a more generic solution that worked for me and wanted to share it:
import numpy as np
import sklearn.preprocessing
sklb = sklearn.preprocessing.LabelBinarizer()
a = np.asarray([1,2,44,3,2])
n = np.unique(a)
sklb.fit(n)
b = sklb.transform(a)
I hope this comes in handy for anyone who hit the same restrictions with the solutions above.
Here's a dimensionality-independent standalone solution.
This will convert any N-dimensional array arr of nonnegative integers to a one-hot (N+1)-dimensional array one_hot, where one_hot[i_1,...,i_N,c] = 1 means arr[i_1,...,i_N] = c. You can recover the input via np.argmax(one_hot, -1).
def expand_integer_grid(arr, n_classes):
    """
    :param arr: N dim array of size i_1, ..., i_N
    :param n_classes: C
    :returns: one-hot N+1 dim array of size i_1, ..., i_N, C
    :rtype: ndarray
    """
    one_hot = np.zeros(arr.shape + (n_classes,))
    axes_ranges = [range(arr.shape[i]) for i in range(arr.ndim)]
    flat_grids = [_.ravel() for _ in np.meshgrid(*axes_ranges, indexing='ij')]
    one_hot[tuple(flat_grids + [arr.ravel()])] = 1  # tuple() avoids deprecated list-of-arrays indexing
    assert (one_hot.sum(-1) == 1).all()
    assert np.allclose(np.argmax(one_hot, -1), arr)
    return one_hot
This type of encoding is usually done with a numpy array. If you are using a numpy array like this:
a = np.array([1,0,3])
then there is a very simple way to convert it to a one-hot encoding:
out = (np.arange(4) == a[:,None]).astype(np.float32)
That's it.
Here p is a 2-D ndarray. We want to know which value is the highest in each row, to put a 1 there and 0 everywhere else.
A clean and easy solution:
max_elements_i = np.expand_dims(np.argmax(p, axis=1), axis=1)
one_hot = np.zeros(p.shape)
np.put_along_axis(one_hot, max_elements_i, 1, axis=1)
I find the easiest solution combines np.take and np.eye:
def one_hot(x, depth: int):
    return np.take(np.eye(depth), x, axis=0)
It works for x of any shape.
Here is an example function that I wrote to do this based upon the answers above and my own use case:
def label_vector_to_one_hot_vector(vector, one_hot_size=10):
    """
    Use to convert a column vector to a 'one-hot' matrix

    Example:
        vector: [[2], [0], [1]]
        one_hot_size: 3
        returns:
            [[ 0., 0., 1.],
             [ 1., 0., 0.],
             [ 0., 1., 0.]]

    Parameters:
        vector (np.array): of size (n, 1) to be converted
        one_hot_size (int), optional: size of the 'one-hot' row vector

    Returns:
        np.array of size (vector.size, one_hot_size): converted to a 'one-hot' matrix
    """
    squeezed_vector = np.squeeze(vector, axis=-1)
    one_hot = np.zeros((squeezed_vector.size, one_hot_size))
    one_hot[np.arange(squeezed_vector.size), squeezed_vector] = 1
    return one_hot
label_vector_to_one_hot_vector(vector=[[2], [0], [1]], one_hot_size=3)
I am adding, for completeness, a simple function using only numpy operators:
def probs_to_onehot(output_probabilities):
    argmax_indices_array = np.argmax(output_probabilities, axis=1)
    onehot_output_array = np.eye(np.unique(argmax_indices_array).shape[0])[argmax_indices_array.reshape(-1)]
    return onehot_output_array
It takes as input a probability matrix: e.g.:
[[0.03038822 0.65810204 0.16549407 0.3797123 ]
...
[0.02771272 0.2760752 0.3280924 0.33458805]]
And it will return
[[0 1 0 0] ... [0 0 0 1]]
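One caveat: np.eye(np.unique(argmax_indices_array).shape[0]) assumes every class index appears at least once among the argmax results; if some class never wins, the identity matrix ends up too small and the indexing can fail. If you know the number of classes, pass it explicitly, e.g. np.eye(num_classes).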
Use the following code; it is simple and easy to adapt:
def one_hot_encode(x):
    """
    argument
    - x: a list of labels
    return
    - one-hot encoding matrix (number of labels, number of classes)
    """
    encoded = np.zeros((len(x), 10))  # 10 classes hard-coded here
    for idx, val in enumerate(x):
        encoded[idx][val] = 1
    return encoded
Using a Neuraxle pipeline step:
Set up your example
import numpy as np
a = np.array([1,0,3])
b = np.array([[0,1,0,0], [1,0,0,0], [0,0,0,1]])
Do the actual conversion
from neuraxle.steps.numpy import OneHotEncoder
encoder = OneHotEncoder(nb_columns=4)
b_pred = encoder.transform(a)
Assert it works
assert np.array_equal(b_pred, b)  # elementwise == on arrays would not yield a single bool
Link to documentation: neuraxle.steps.numpy.OneHotEncoder