How to convert a 2D numpy array to one-hot encoding? - python

I was trying to apply one-hot encoding to the following data, but I am confused about the output. Before applying one-hot encoding the shape of the data is (5, 10), and after applying one-hot encoding the shape is (5, 20). But each letter should be encoded as a 4-element vector, so the shape should be (5, 40) instead of (5, 20). How can I solve this?
X = [['A', 'G', 'T', 'G', 'T', 'C', 'T', 'A', 'A', 'C'],
     ['A', 'G', 'T', 'G', 'T', 'C', 'T', 'A', 'A', 'C'],
     ['G', 'C', 'C', 'A', 'C', 'T', 'C', 'G', 'G', 'T'],
     ['G', 'C', 'C', 'A', 'C', 'T', 'C', 'G', 'G', 'T'],
     ['G', 'C', 'C', 'A', 'C', 'T', 'C', 'G', 'G', 'T']]
Y = np.array(X)
print('Shape of numpy array', Y.shape)
# one hot encoding
onehot_encoder = OneHotEncoder(sparse=False)
onehot_encoded = onehot_encoder.fit_transform(Y)
print(onehot_encoded)
print('Shape of one hot encoding', onehot_encoded.shape)
Output:
Shape of numpy array (5, 10)
[[1. 0. 0. 1. 0. 1. 0. 1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0.]
[1. 0. 0. 1. 0. 1. 0. 1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0.]
[0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 0. 1. 0. 1.]
[0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 0. 1. 0. 1.]
[0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 0. 1. 0. 1.]]
Shape of one hot encoding (5, 20)

OneHotEncoder learns the categories of each column separately, and every column in your array happens to contain only two distinct letters, so each column expands to 2 one-hot columns rather than 4: 10 x 2 = 20. To get 4 new columns for each column in your ndarray, encode every column against the full alphabet. One way with plain numpy:
import numpy as np

X = np.array(X)
# Get the unique classes over the whole array (here: A, C, G, T).
classes = np.unique(X)
# Replace classes with integers.
X = np.searchsorted(classes, X)
# Get an identity matrix; row i is the one-hot vector for class i.
eye = np.eye(classes.shape[0])
# Iterate over all columns
# and get the one-hot encoding for each column.
X = np.concatenate([eye[i] for i in X.T], axis=1)
X.shape
# (5, 40)
Consider the following example:
[['A', 'G'],
 ['C', 'C'],
 ['T', 'A']]
You will get 8 (2 x 4) columns in your one-hot encoded ndarray:
   Column 0       Column 1
 A  C  G  T     A  C  G  T
 1  0  0  0     0  0  1  0
 0  1  0  0     0  1  0  0
 0  0  0  1     1  0  0  0
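Alternatively, scikit-learn's OneHotEncoder gives the (5, 40) result directly if you fix the category list for every column. A minimal sketch (assuming scikit-learn >= 1.2; on older versions replace sparse_output=False with sparse=False):
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([list('AGTGTCTAAC'),
              list('AGTGTCTAAC'),
              list('GCCACTCGGT'),
              list('GCCACTCGGT'),
              list('GCCACTCGGT')])
# Force all four letters as the categories of every column,
# so each column expands to 4 one-hot columns: 10 * 4 = 40.
categories = [['A', 'C', 'G', 'T']] * X.shape[1]
encoder = OneHotEncoder(categories=categories, sparse_output=False)
onehot = encoder.fit_transform(X)
print(onehot.shape)
# (5, 40)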

Related

Iterate over padded area in 2D array in python

Assume I have a 2D array in Python and I add some padding. How can I iterate over the new padded area only?
For example
1 2 3
4 5 6
7 8 9
Becomes
x x x x x x x
x x x x x x x
x x 1 2 3 x x
x x 4 5 6 x x
x x 7 8 9 x x
x x x x x x x
x x x x x x x
How can I loop over only the x's?
Not sure if I understand what you are trying to do, but if you are using numpy, you can use masks:
import numpy as np

arr = np.arange(1, 10).reshape(3, 3)
# Mask full of True's.
mask = np.ones((7, 7), dtype=bool)
# Set the interior of the mask to False.
mask[2:-2, 2:-2] = False
# Using zero padding as an example.
pad_arr = np.zeros((7, 7))
pad_arr[2:-2, 2:-2] = arr
print(pad_arr)
# Loop over the elements of the padding, where mask == True.
for value in pad_arr[mask]:
    print(value)
Returns:
[[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 2. 3. 0. 0.]
[0. 0. 4. 5. 6. 0. 0.]
[0. 0. 7. 8. 9. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]]
followed by 0.0 printed 40 times (the padded values).
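If you want this to work for arbitrary array sizes and pad widths, here is a sketch using np.pad (assuming the same pad width of 2 as above):
import numpy as np

arr = np.arange(1, 10).reshape(3, 3)
pad = 2
# np.pad adds `pad` rows/columns of zeros on every side.
pad_arr = np.pad(arr, pad, mode='constant')
# Build the mask from the padded shape instead of hard-coding (7, 7).
mask = np.ones(pad_arr.shape, dtype=bool)
mask[pad:-pad, pad:-pad] = False
for value in pad_arr[mask]:
    print(value)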

How to get the prediction percentages for an image?

I want to know the prediction percentages (class probabilities) for one image:
classes = model.predict(image)
print(classes)
Output:
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
but I want it to show something like:
[0.95, 0.20 , 0.30 , 0.0 , 0.25 .........]
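If the model is a Keras classifier whose final layer is softmax, model.predict already returns the class probabilities; an output of exact 0s and a single 1 usually means the model is extremely confident, or the values were rounded when printed. A sketch (assuming model and image are defined as in the question):
import numpy as np

probs = model.predict(image)[0]  # probabilities for each class
np.set_printoptions(precision=2, suppress=True)
print(probs)  # e.g. [0.95 0.20 0.30 ...]
# Index and probability of the top class:
print(np.argmax(probs), probs.max())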

XGBoost feature_importances_ parameters gives a 0 valued vector

I have experimented with XGBClassifier() on a large dataset of shape [400000, 93].
The data contains a lot of NaN values, so I used imputation from the sklearn package:
imputer = Imputer()
imputed_x = imputer.fit_transform(data)
data = imputed_x
but the feature importance values look like this:
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Notice there is only a 1 and the rest are 0. For this reason, the resulting metrics are:
precision: 1.0
recall: 1.0
accuracy: 1.0
traning_accuracy: 1.0
Why can't the model fit the data properly?
Example code fragment:
model_xboost = XGBClassifier(max_depth=5, n_estimators=100)
#train
model_xboost.fit(train_data, train_labels)
print(model_xboost.feature_importances_)
From the feature importances, there is only a single 1 and the rest are 0. It looks as if you have included a column in the training data that is almost identical to the target, resulting in that feature being perfectly correlated with the target!
For example, I once came across a classification problem where I used patients' backgrounds and medical parameters to predict whether or not a patient had cancer. One column called "data_source" became the most significant feature, purely because patients who came from "XXX Cancer Hospital" were almost certain to have cancer!
This is a good example of unintended data leakage.
You have one feature that is fully correlated with the target (correlation 1.0). That effectively means you have trained your model on the target itself. You must remove that column before training.
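A quick way to confirm the leak is to look at which column the model relies on and how strongly it correlates with the target. A sketch (assuming train_data is a numeric numpy array and train_labels is the target, as in the question):
import numpy as np

# Index of the suspiciously dominant feature.
leak_idx = np.argmax(model_xboost.feature_importances_)
# Pearson correlation between that column and the target.
corr = np.corrcoef(train_data[:, leak_idx], train_labels)[0, 1]
print(leak_idx, corr)
# If |corr| is close to 1.0, drop the column and retrain:
clean_data = np.delete(train_data, leak_idx, axis=1)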

NetworkX: adjacency matrix does not correspond to graph

Say I have two options for generating the adjacency matrix of a network: nx.adjacency_matrix() and my own code. I wanted to test the correctness of my code and ran into some strange discrepancies.
Example: a 3x3 lattice network.
import networkx as nx
import matplotlib.pyplot as plt

N = 3
G = nx.grid_2d_graph(N, N)
pos = dict((n, n) for n in G.nodes())
labels = dict(((i, j), i + (N - 1 - j) * N) for i, j in G.nodes())
nx.relabel_nodes(G, labels, False)
inds = sorted(labels.keys())
vals = sorted(labels.values())
pos2 = dict(zip(vals, inds))
plt.figure()
nx.draw_networkx(G, pos=pos2, with_labels=True, node_size=200)
This is the visualization (figure omitted): the nodes are drawn on a 3x3 grid, with labels 0-2 on the top row, 3-5 in the middle, and 6-8 on the bottom.
The adjacency matrix with nx.adjacency_matrix():
B=nx.adjacency_matrix(G)
B1=B.todense()
[[0 0 0 0 0 1 0 0 1]
[0 0 0 1 0 1 0 0 0]
[0 0 0 1 0 1 0 1 1]
[0 1 1 0 0 0 1 0 0]
[0 0 0 0 0 0 0 1 1]
[1 1 1 0 0 0 0 0 0]
[0 0 0 1 0 0 0 1 0]
[0 0 1 0 1 0 1 0 0]
[1 0 1 0 1 0 0 0 0]]
According to it, node 0 (entire 1st row and entire 1st column) is connected to nodes 5 and 8. But if you look at the image above this is wrong, as it connects to nodes 1 and 3.
Now my code (to be run in the same script as the above):
import numpy
import math

P = 3

def nodes_connected(i, j):
    try:
        if i in G.neighbors(j):
            return 1
    except nx.NetworkXError:
        return False

A = numpy.zeros((P * P, P * P))
for i in range(0, P * P, 1):
    for j in range(0, P * P, 1):
        if i not in G.nodes():
            A[i][:] = 0
            A[:][i] = 0
        elif i in G.nodes():
            A[i][j] = nodes_connected(i, j)
            A[j][i] = A[i][j]
for i in range(0, P * P, 1):
    for j in range(0, P * P, 1):
        if math.isnan(A[i][j]):
            A[i][j] = 0
print(A)
This yields:
[[ 0. 1. 0. 1. 0. 0. 0. 0. 0.]
[ 1. 0. 1. 0. 1. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 1. 0. 0. 0.]
[ 1. 0. 0. 0. 1. 0. 1. 0. 0.]
[ 0. 1. 0. 1. 0. 1. 0. 1. 0.]
[ 0. 0. 1. 0. 1. 0. 0. 0. 1.]
[ 0. 0. 0. 1. 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1. 0. 1. 0. 1.]
[ 0. 0. 0. 0. 0. 1. 0. 1. 0.]]
which says that node 0 is connected to nodes 1 and 3. Why does such difference exist? What is wrong in this situation?
NetworkX doesn't know what order you want the nodes to be in.
Here is how to call it: adjacency_matrix(G, nodelist=None, weight='weight').
If you want a specific order, set nodelist to be a list in that order.
So for example adjacency_matrix(G, nodelist=range(9)) should get what you want.
Why is this? Well, because a graph can have just about anything as its nodes (anything hashable). One of your nodes could have been "parrot" or (1, 2). So NetworkX stores the nodes as keys in a dict rather than assuming they are the non-negative integers starting at 0, and dict keys have an arbitrary order.
A more general solution, if your nodes have some logical ordering (as is the case if you generate a graph using G=nx.grid_2d_graph(3,3), which returns tuples from (0,0) to (2,2), and as in your example), is to use:
adjacency_matrix(G, nodelist=sorted(G.nodes()))
This sorts the returned list of nodes of G and passes it as the nodelist.
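A quick check (a sketch, continuing from the question's graph G after relabeling):
B = nx.adjacency_matrix(G, nodelist=sorted(G.nodes()))
print(B.todense()[0])
# [[0 1 0 1 0 0 0 0 0]]
# Row 0 now shows node 0 connected to nodes 1 and 3,
# matching the drawing.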

transform an adjacency list into a sparse adjacency matrix using python

Using scipy, I was able to get my data into the following format (pairs of (row, col) indices with a weight):
(row, col) (weight)
(0, 0) 5
(0, 47) 5
(0, 144) 5
(0, 253) 4
(0, 513) 5
...
(6039, 3107) 5
(6039, 3115) 3
(6039, 3130) 4
(6039, 3132) 2
How can I transform this into an array or sparse matrix with zeros for missing weight values as such? (based on the data above, column 1 to 46 should be filled with zeros, and so on...)
0 1 2 3 ... 47 48 49 50
1 [0 0 0 0 ... 5 0 0 0 0
2 2 0 1 0 ... 4 0 5 0 0
3 3 1 0 5 ... 1 0 0 4 2
4 0 0 0 4 ... 5 0 1 3 0
5 5 1 5 4 ... 0 0 3 0 1]
I know it is better in terms of memory to keep the data in the format above, but I need it as a matrix for experimentation.
scipy.sparse does it for you.
from scipy.sparse import dok_matrix

your_data = [((2, 7), 1)]  # [((row, col), weight), ...]
XDIM, YDIM = 10, 10  # Replace with your dimensions
smat = dok_matrix((XDIM, YDIM))
# Assign entries one by one; dok_matrix supports efficient item assignment.
for (row, col), weight in your_data:
    smat[row, col] = weight
dense = smat.toarray()
print(dense)
'''
[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
'''
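If your data is already split into parallel row, column, and weight lists, a coo_matrix is even more direct. A sketch (the lists below reuse the first few entries from the question; the shape is derived from the largest indices):
from scipy.sparse import coo_matrix

rows = [0, 0, 0, 0, 0]
cols = [0, 47, 144, 253, 513]
weights = [5, 5, 5, 4, 5]
smat = coo_matrix((weights, (rows, cols)),
                  shape=(max(rows) + 1, max(cols) + 1))
dense = smat.toarray()  # zeros wherever no weight was given
print(dense.shape)      # (1, 514)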
