How to create an upper triangular matrix in PyTorch? - python

Simple question, but is there a native way to create an upper triangular matrix from an existing matrix in PyTorch? I was thinking of using a mask, but even that requires creating the upper triangular matrix.

import torch
# .triu() zeroes out everything below the main diagonal, keeping the upper triangular part
upper_tri = torch.ones(row, col).triu()
Eg:
>>> mat = torch.ones(3, 3).triu()
>>> print(mat)
tensor([[1., 1., 1.],
        [0., 1., 1.],
        [0., 0., 1.]])

import torch
l = torch.tril(torch.ones(row, column))
This will return the lower triangular part of the matrix of size (row, column).
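If the goal is to take the upper triangular part of an existing matrix (as the question asks), .triu() can be applied to that tensor directly, or used to build a boolean mask; a minimal sketch (the values in mat are just an illustration):
import torch

mat = torch.arange(9, dtype=torch.float32).reshape(3, 3)  # any existing matrix

# Option 1: apply triu() directly to the existing tensor (torch.triu(mat) is equivalent)
upper = mat.triu()

# Option 2: build a boolean upper-triangular mask and apply it
mask = torch.ones_like(mat, dtype=torch.bool).triu()
upper_masked = mat * mask

print(torch.equal(upper, upper_masked))  # True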

Related

Calculating Confusion Matrix by Using the Array of Arrays

I am using the transformers and datasets libraries to train a multi-class NLP model on a specific real-world dataset, and I need an idea of how my model performs for each label. So, I'd like to calculate the confusion matrix. I have 4 labels. My result.predictions looks like
array([[ -6.906 , -8.11 , -10.29 , 6.242 ],
[ -4.51 , 3.705 , -9.76 , -7.49 ],
[ -6.734 , 3.36 , -10.27 , -6.883 ],
...,
[ 8.41 , -9.43 , -9.45 , -8.6 ],
[ 1.3125, -3.094 , -11.016 , -9.31 ],
[ -7.152 , -8.5 , -9.13 , 6.766 ]], dtype=float16)
Here, when the predicted value is positive the model predicts 1, otherwise it predicts 0. Next, my result.label_ids looks like
array([[0., 0., 0., 1.],
[1., 0., 0., 0.],
[0., 0., 0., 1.],
...,
[1., 0., 0., 0.],
[1., 0., 0., 0.],
[0., 0., 0., 1.]], dtype=float32)
As you can see, the model returns an array of 4 values per example, giving 0 to the false labels and 1 to the true label.
In general, I've been using the following function to calculate the confusion matrix, but in this case it didn't work since this function is for 1-dimensional arrays.
import numpy as np

def compute_confusion_matrix(labels, true, pred):
    K = len(labels)  # Number of classes
    result = np.zeros((K, K))
    for i in range(len(true)):  # iterate over samples, not over the class count
        result[true[i]][pred[i]] += 1
    return result
If possible I'd like to modify this function to suit my case above. At the very least, I would like to understand how I can implement a confusion matrix for results that are in the form of multi-dimensional arrays.
A possibility could be reversing the encoding to the format required by compute_confusion_matrix and, in this way, it is still possible to use your function!
To convert the predictions it's possible to do:
pred = list(np.where(result.label_ids == 1.)[1])
where np.where(result.label_ids == 1.)[1] is a numpy 1-dimensional array containing the indexes of the 1.s in each row of result.label_ids.
So pred will look like this according to your result.label_ids:
[3, 0, 3, ..., 0, 0, 3]
so it should have the same format as the original true (if true is also one-hot encoded, the same strategy can be used to convert it), and it can be used as input to your function for computing the confusion matrix.
First of all I would like to thank Nicola Fanelli for the idea.
The function I gave above, as well as sklearn.metrics.confusion_matrix(), needs to be provided lists of predicted and true values. After my prediction step, I try to retrieve my true and predicted values in order to calculate a confusion matrix. The results I was getting are in the following form
array([[0., 0., 0., 1.],
[1., 0., 0., 0.],
[0., 0., 0., 1.],
...,
[1., 0., 0., 0.],
[1., 0., 0., 0.],
[0., 0., 0., 1.]], dtype=float32)
Here the idea is to retrieve the positional index of the value 1. When I tried the approach suggested by Nicola Fanelli, the resulting sizes were smaller than the initial ones and they didn't match, so the confusion matrix could not be calculated. To be honest, I couldn't find the reason behind it, but I'll investigate it further later.
So, I used a different technique to implement the same idea: np.argmax(), appending the resulting positions to a new list. Here is the code sample for the true values:
true = []
for i in range(len(result.label_ids)):
    n = np.array(result.label_ids[i])
    true.append(np.argmax(n))
This way I got the results in the desired format without the sizes being changed.
Even though this is a working solution for my problem, I am still open to more elegant ways to approach this problem.
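As a side note, the same conversion can be done without an explicit loop. A minimal vectorized sketch, assuming result.label_ids is the one-hot array shown above and result.predictions holds the logits from the question:
import numpy as np
from sklearn.metrics import confusion_matrix

# Column index of the 1 in each one-hot row -> true class per example
true = np.argmax(result.label_ids, axis=1)

# Column index of the largest logit in each row -> predicted class per example
pred = np.argmax(result.predictions, axis=1)

cm = confusion_matrix(true, pred)  # 4x4 matrix for the 4 labels
Both arrays keep one entry per example, so their sizes match by construction.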

How does numpy subtraction of vector work?

How does the following operation give three separate vectors inside an array? I don't understand how it calculates the result. Thanks in advance!
import numpy as np
ligandPos=np.array([0.,1,2])
ionPos=np.array([0,0,0])
print(np.array([O - ionPos for O in ligandPos]))
array([[0., 0., 0.],
[1., 1., 1.],
[2., 2., 2.]])
We can substitute in the values of ligandPos to see that this is equivalent to
np.array([0 - ionPos, 1 - ionPos, 2 - ionPos])
0-ionPos is of course a vector [0,0,0]
1-ionPos is [1,1,1] and
2-ionPos is [2,2,2]
All of these are put together to make a 2D array.
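For what it's worth, NumPy broadcasting produces the same result without the list comprehension; a small sketch:
import numpy as np

ligandPos = np.array([0., 1, 2])
ionPos = np.array([0, 0, 0])

# Reshaping ligandPos to a (3, 1) column and subtracting the (3,) ionPos
# broadcasts to a (3, 3) result: one row per ligand position
print(ligandPos[:, None] - ionPos)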

scipy.linalg.norm different from sklearn.preprocessing.normalize?

from numpy.random import rand
from sklearn.preprocessing import normalize
from scipy.sparse import csr_matrix
from scipy.linalg import norm
w = (rand(1,10)<0.25)*rand(1,10)
x = (rand(1,10)<0.25)*rand(1,10)
w_csr = csr_matrix(w)
x_csr = csr_matrix(x)
(normalize(w_csr,axis=1,copy=False,norm='l2')*normalize(x_csr,axis=1,copy=False,norm='l2')).todense()
norm(w,ord='fro')*norm(x,ord='fro')
I am working with scipy csr_matrix and would like to normalize two matrices using the Frobenius norm and get their product. But norm from scipy.linalg and normalize from sklearn.preprocessing seem to evaluate the matrices differently. Since technically in the above two cases I am calculating the same Frobenius norm, shouldn't the two expressions evaluate to the same thing? But I get the following answer:
matrix([[ 0.962341]])
0.4431811178371029
for sklearn.preprocessing and scipy.linalg.norm respectively. I am really interested to know what I am doing wrong.
sklearn.preprocessing.normalize divides each row by its norm. It returns a matrix with the same shape as its input. scipy.linalg.norm returns the norm of the matrix. So your calculations are not equivalent.
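To illustrate the difference with a tiny dense example (made-up values, not from the question):
import numpy as np
from scipy.linalg import norm
from sklearn.preprocessing import normalize

a = np.array([[3., 4.],
              [0., 5.]])

print(normalize(a, norm='l2', axis=1))  # each row divided by its own L2 norm -> shape (2, 2)
# [[0.6 0.8]
#  [0.  1. ]]

print(norm(a, ord='fro'))  # a single scalar for the whole matrix: sqrt(9 + 16 + 25)
# 7.0710678...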
Note that your code is not correct as it is written. This line
(normalize(w_csr,axis=1,copy=False,norm='l2')*normalize(x_csr,axis=1,copy=False,norm='l2')).todense()
raises ValueError: dimension mismatch. The two calls to normalize both return matrices with shapes (1, 10), so their dimensions are not compatible for a matrix product. What did you do to get matrix([[ 0.962341]])?
Here's a simple function to compute the Frobenius norm of a sparse (e.g. CSR or CSC) matrix:
def spnorm(a):
    # Frobenius norm computed from the nonzero entries of a sparse matrix
    return np.sqrt((a.data ** 2).sum())
For example,
In [182]: b_csr
Out[182]:
<3x5 sparse matrix of type '<type 'numpy.float64'>'
with 5 stored elements in Compressed Sparse Row format>
In [183]: b_csr.A
Out[183]:
array([[ 1., 0., 0., 0., 0.],
[ 0., 2., 0., 4., 0.],
[ 0., 0., 0., 2., 1.]])
In [184]: spnorm(b_csr)
Out[184]: 5.0990195135927845
In [185]: norm(b_csr.A)
Out[185]: 5.0990195135927845
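So, to normalize each whole matrix by its Frobenius norm before taking the product (which seems to be the original intent), one can divide by this scalar instead of using sklearn's row-wise normalize. A sketch under that assumption (and assuming neither matrix is all zeros):
import numpy as np
from numpy.random import rand
from scipy.sparse import csr_matrix

def spnorm(a):
    return np.sqrt((a.data ** 2).sum())

w_csr = csr_matrix((rand(1, 10) < 0.25) * rand(1, 10))
x_csr = csr_matrix((rand(1, 10) < 0.25) * rand(1, 10))

# Scale each sparse matrix by its own Frobenius norm, then take the matrix product
w_n = w_csr / spnorm(w_csr)
x_n = x_csr / spnorm(x_csr)
print((w_n * x_n.T).todense())  # (1, 10) x (10, 1) -> a 1x1 matrix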

Combining 2-d arrays to form a 3-d array

I'm defining a function which will return a 3-d grid. Inside it, I use a function defined earlier that returns a 2-d array. I want to join these 2-d arrays to form the 3-d array during an iteration, but I've looked at functions like meshgrid(), dstack(), and concatenate() and can't seem to get any of them to fit right into the code.
The program models the spread of waves from a point source on the 2-d array, and the 3-d array shows how the displacement of the medium changes over the course of a wavelength.
def make_wave_snapshot(size, wavelength, phase):
    waves_array = np.zeros((size, size), float)
    if size % 2 == 0:
        for y in range(size):
            for x in range(size):
                r = math.hypot((size/2 - x - 0.5), (size/2 - y - 0.5))
                d = np.sin((2*math.pi*r/wavelength) - phase)/np.sqrt(r)
                waves_array[y, x] = d
        dp.display_2d_array(waves_array)  # This is in another module altogether
        return waves_array  # Displays array showing values
    else:
        return 'Please use an even integer for size.'
def make_wave_sequence(size, wavelength, nsteps):
    waves_sequence = np.zeros((nsteps, size, size), float)
    if nsteps % 1 == 0:
        for z in range(nsteps):
            make_wave_snapshot(size, wavelength, (2*math.pi*z/nsteps))
            waves_sequence = ???
        return waves_sequence  # Displays array showing values
    else:
        return 'Please use a positive integer for the number of steps'
The issue is turning the 'wave_array's into a 'wave_sequence'. Generous commenting would be very appreciated if you write any code. Many thanks!
If I understand correctly you have a three dimensional array, something like:
wave = np.zeros((2, 2, 2), np.float)
([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0., 0.]]])
And you want to insert a two dimensional array, returned from your function like:
([[ 1., 2.],
[ 3., 4.]])
Such that your 3D array is now:
([[[1., 2.],
[3., 4.]],
[[0., 0.],
[0., 0.]]])
After the first iteration of your for loop. If that is correct, then it's actually pretty simple and you're most of the way there. You can assign an "element" to your 3D array that is a 2D array as long as you select the correct entry:
for z in range(nsteps):
    waves_sequence[z] = make_wave_snapshot(size, wavelength, (2*math.pi*z/nsteps))
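Putting that into the question's function, a sketch (assuming make_wave_snapshot is defined as above and size is even, so it returns a 2-D array rather than the error string):
import math
import numpy as np

def make_wave_sequence(size, wavelength, nsteps):
    # Pre-allocate the 3-D array: one (size, size) layer per time step
    waves_sequence = np.zeros((nsteps, size, size), float)
    for z in range(nsteps):
        # Each snapshot fills one layer of the 3-D array
        waves_sequence[z] = make_wave_snapshot(size, wavelength, 2*math.pi*z/nsteps)
    return waves_sequence
An equivalent alternative is to collect the 2-D arrays in a list and combine them at the end with np.stack(snapshots, axis=0).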

How do I add a column to a python (matrix) multi-dimensional array? [duplicate]

Possible Duplicate:
What's the simplest way to extend a numpy array in 2 dimensions?
I've been frustrated as a Matlab user switching over to python because I don't know all the tricks and get stuck hacking together code until it works. Below is an example where I have a matrix that I want to add a dummy column to. Surely, there is a simpler way than the zip vstack zip method below. It works, but it is totally a noob attempt. Please enlighten me. Thank you in advance for taking the time for this tutorial.
# BEGIN CODE
from pylab import *
# Find that unlike most things in python i must build a dummy matrix to
# add stuff in a for loop.
H = ones((4,10-1))
print "shape(H):"
print shape(H)
print H
### enter for loop to populate dummy matrix with interesting data...
# stuff happens in the for loop, which is awesome and off topic.
### exit for loop
# more awesome stuff happens...
# Now I need a new column on H
H = zip(*vstack((zip(*H),ones(4)))) # THIS SEEMS LIKE THE DUMB WAY TO DO THIS...
print "shape(H):"
print shape(H)
print H
# in conclusion. I found a hack job solution to adding a column
# to a numpy matrix, but I'm not happy with it.
# Could someone educate me on the better way to do this?
# END CODE
Use np.column_stack:
In [12]: import numpy as np
In [13]: H = np.ones((4,10-1))
In [14]: x = np.ones(4)
In [15]: np.column_stack((H,x))
Out[15]:
array([[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
In [16]: np.column_stack((H,x)).shape
Out[16]: (4, 10)
There are several functions that let you concatenate arrays in different dimensions:
np.vstack along axis=0
np.hstack along axis=1
np.dstack along axis=2
In your case, np.hstack looks like what you want. np.column_stack stacks a set of 1D arrays as a 2D array, but you already have a 2D array to start with.
Of course, nothing prevents you from doing it the hard way:
>>> new = np.empty((a.shape[0], a.shape[1]+1), dtype=a.dtype)
>>> new.T[:a.shape[1]] = a.T
Here, we created an empty array with an extra column, then used some tricks to set the first columns to a (using the transpose operator T, so that new.T has an extra row compared to a.T...). Note that the last column of new is still uninitialized and has to be filled in separately, e.g. with new[:, -1] = 1.
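For completeness, a minimal sketch of the np.hstack route suggested above, appending a column of ones to H (shapes taken from the question):
import numpy as np

H = np.ones((4, 9))
# The new column must be 2-D with a single column so the shapes align along axis=1
H = np.hstack((H, np.ones((4, 1))))
print(H.shape)  # (4, 10)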
