Writing binary in Python and reading it in C++ - python

Writing binary in Python:
# curr_feat is **numpy array**
# curr_feat's shape is **(64,)**
# curr_feat dtype is **float64**
"""
array([1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 0., 1., 0., 1., 1., 1., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1.])
"""
curr_feat.tofile(f)
Reading in C++:
fp=fopen(feat_file,"rb");
fread(tmp.data(),(maxDim)*sizeof(bool),1,fp);
fclose(fp);
The values read back differ from the values written. What should I do to write/read correctly?

Your fread doesn't look right: it reads maxDim * sizeof(bool) bytes (one byte per element), but the file contains float64 values, which are 8 bytes each. Try this:
fread(tmp.data(), sizeof(double), maxDim, fp);
This assumes that tmp is a std::vector<double> and you've already resized it to maxDim. Trying to read into a vector that is not of sufficient size would be undefined behavior.

Note that tofile writes raw binary by default (sep='' is already the default), so
curr_feat.tofile(f)
produces 64 raw float64 values (512 bytes). The mismatch is on the reading side, where the C++ code reads bool-sized elements instead of doubles.
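A quick way to confirm the file layout is to round-trip it in Python with a matching dtype. A sketch, using a hypothetical 64-element feature vector in place of the original curr_feat:

```python
import os
import tempfile

import numpy as np

# Hypothetical 64-element float64 feature vector, as in the question.
curr_feat = np.zeros(64, dtype=np.float64)
curr_feat[:3] = 1.0

# tofile with the default sep='' writes raw binary: 64 * 8 = 512 bytes.
path = os.path.join(tempfile.mkdtemp(), "feat.bin")
curr_feat.tofile(path)
assert os.path.getsize(path) == 64 * np.dtype(np.float64).itemsize

# Reading back with the matching dtype recovers the values exactly --
# the C++ side must likewise read 64 doubles, not 64 bools.
restored = np.fromfile(path, dtype=np.float64)
assert np.array_equal(curr_feat, restored)
```

If the Python round-trip succeeds, any remaining mismatch is in the C++ element size or count.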

Related

How to create an "islands" style pytorch matrix

Probably a simple question, hopefully with a simple solution:
I am given a (sparse) 1D boolean tensor of size [1, N].
I would like to produce a 2D tensor out of it of size [N, N], containing the islands induced by the 1D tensor. The original post illustrated this with an image: the upper row is the 1D boolean tensor, and the matrix below it is the resulting matrix.
Given a mask input:
>>> x = torch.tensor([0,0,0,1,0,0,0,0,1,0,0])
You can retrieve the lengths of the blocks by applying torch.diff to the nonzero indices:
>>> index = x.nonzero()[:,0].diff(prepend=torch.zeros(1), append=torch.ones(1)*len(x))
tensor([3., 5., 3.])
Then use torch.block_diag to create the diagonal block matrix:
>>> torch.block_diag(*[torch.ones(i,i) for i in index.int()])
tensor([[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.]])
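For comparison, the same islands construction can be sketched in plain NumPy (assuming a NumPy array rather than a torch tensor): a cumulative sum of the mask assigns a segment id to each position, and an outer equality comparison of the ids yields the block matrix directly.

```python
import numpy as np

# Each 1 in the mask starts a new segment, so a cumulative sum gives
# every position a segment id; positions in the same segment compare
# equal, producing the block-diagonal islands matrix.
x = np.array([0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0])
seg = np.cumsum(x)                                  # [0 0 0 1 1 1 1 1 2 2 2]
blocks = (seg[:, None] == seg[None, :]).astype(float)
```

This avoids the explicit list of per-block matrices entirely.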

Error message with sklearn function " 'RocCurveDisplay' has no attribute 'from_predictions' "

I am trying to use the sklearn function RocCurveDisplay.from_predictions as presented in https://scikit-learn.org/stable/modules/generated/sklearn.metrics.RocCurveDisplay.html#sklearn.metrics.RocCurveDisplay.from_predictions.
I run the function like this:
from sklearn.metrics import RocCurveDisplay
true = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.])
prediction = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0.,
1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 1., 0., 0.])
RocCurveDisplay.from_predictions(true,prediction)
plt.show()
I get the error message "AttributeError: type object 'RocCurveDisplay' has no attribute 'from_predictions' ".
Is this a version problem? I am using the latest one, 0.24.1.
Version 0.24.1 is not the latest release. You'll need to upgrade to scikit-learn 1.0, since from_predictions was added in 1.0.
You can upgrade using:
pip install --upgrade scikit-learn
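If upgrading is not an option, one fallback is to build the display from roc_curve output, whose constructor-based API already exists in 0.24. A sketch with made-up labels and scores:

```python
import numpy as np
from sklearn.metrics import RocCurveDisplay, auc, roc_curve

# Hypothetical labels/scores, just for illustration. On scikit-learn
# versions without RocCurveDisplay.from_predictions, compute the curve
# yourself and hand it to the display object.
true = np.array([0, 0, 1, 1, 1, 0])
prediction = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])

fpr, tpr, _ = roc_curve(true, prediction)
display = RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=auc(fpr, tpr))
# display.plot() followed by plt.show() draws the same curve that
# from_predictions would produce.
```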

How can I set precision when printing a PyTorch tensor with integers?

I have:
mask = mask_model(input_spectrogram)
mask = torch.round(mask).float()
torch.set_printoptions(precision=4)
print(mask.size(), input_spectrogram.size(), masked_input.size())
print(mask[0][-40:], mask[0].size())
This prints:
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.], grad_fn=<SliceBackward>) torch.Size([400])
But I want it to print 1.0000 instead of 1.. Why won't my set_precision do it? Even when I converted to float()?
Unfortunately, this is simply how PyTorch displays tensor values: when every value in a float tensor is integral, trailing zeros are suppressed regardless of the precision setting. Your code works fine; if you do print(mask * 1.1), you can see that PyTorch does print 4 decimal places once the values can no longer be represented as integers.
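If you need fixed four-decimal output regardless, one workaround is to format the values yourself after pulling them out with .tolist(). A plain-Python sketch with stand-in values:

```python
# Stand-in for mask[0][-40:].tolist(); format each float explicitly
# instead of relying on the tensor's automatic repr.
values = [1.0, 0.0, 1.0]
formatted = ", ".join(f"{v:.4f}" for v in values)
print(formatted)  # 1.0000, 0.0000, 1.0000
```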

Python analogue of "ktensor" function from matlab

Is there any Python package that can build a tensor by calculating outer products of matrix columns, like the "ktensor" function from MATLAB's Tensor Toolbox?
https://www.tensortoolbox.org/ktensor_doc.html
You can do this using the parafac function from the tensorly package. From their documentation:
import numpy as np
import tensorly as tl
tensor = tl.tensor([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
from tensorly.decomposition import parafac
factors = parafac(tensor, rank=2)
factors now holds the list of matrices that form the decomposition.
As mentioned in the previous answer, you can use TensorLy for manipulating tensors, including tensors in CP form ("Kruskal tensors"). The tensorly.kruskal_tensor module handles these particular operations.
If you want to reconstruct a Kruskal tensor from the factors (and optional vector of weights), you can use kruskal_to_tensor from TensorLy:
full_tensor = kruskal_to_tensor(kruskal_tensor)
or, equivalently:
full_tensor = kruskal_to_tensor((weights, factors))
If your decomposition has rank R, weights is a vector of length R and factors is a list of factor matrices with R columns each.
You can also directly use the underlying functions such as outer-product or Khatri-Rao.
As mentioned in the previous answer, if you have a full tensor, you can apply CP (parafac) decomposition to approximate the factors.
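For intuition, what kruskal_to_tensor computes can be sketched in plain NumPy for the two-way case: the full tensor is a weighted sum of outer products of the factor columns. Random factors here, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2                         # rank of the decomposition
A = rng.random((12, R))       # factor matrix for mode 0
B = rng.random((12, R))       # factor matrix for mode 1
weights = np.ones(R)

# Weighted sum of rank-one terms, expressed as a single einsum.
full = np.einsum('r,ir,jr->ij', weights, A, B)

# Equivalent explicit loop over the rank-one outer products.
full_loop = sum(weights[r] * np.outer(A[:, r], B[:, r]) for r in range(R))
assert np.allclose(full, full_loop)
```

Higher-order Kruskal tensors generalize this with one factor matrix per mode.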

Switch triangular matrix

Is there an easy way to mirror a triangular matrix?
import numpy as np

shape = (4, 8)
x3 = np.ones(shape)
for m in range(len(x3)):
    step = m * 2 + 1  # each row keeps two more ones than the last
    for n in range(step, len(x3[m])):
        x3[m][n] = 0
Gives me this matrix:
array([[1., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 0.]])
I want to switch this to something like this:
array([[0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 1., 1., 1., 1., 1.],
[0., 1., 1., 1., 1., 1., 1., 1.]])
Is there a simple way of doing this?
np.flip from NumPy does the trick:
A = np.array([[1., 0., 0., 0., 0., 0., 0., 0.],
              [1., 1., 1., 0., 0., 0., 0., 0.],
              [1., 1., 1., 1., 1., 0., 0., 0.],
              [1., 1., 1., 1., 1., 1., 1., 0.]])
np.flip(A, 1)
# returns what you want: axis=1 mirrors each row (vertical symmetry)
array([[0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 1., 1., 1., 1., 1.],
[0., 1., 1., 1., 1., 1., 1., 1.]])
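Equivalently, np.fliplr and reversed slicing produce the same mirrored result; a small sketch with the same matrix:

```python
import numpy as np

A = np.array([[1., 0., 0., 0., 0., 0., 0., 0.],
              [1., 1., 1., 0., 0., 0., 0., 0.],
              [1., 1., 1., 1., 1., 0., 0., 0.],
              [1., 1., 1., 1., 1., 1., 1., 0.]])

# Three equivalent ways to mirror the columns.
mirrored = np.flip(A, axis=1)
assert np.array_equal(mirrored, np.fliplr(A))
assert np.array_equal(mirrored, A[:, ::-1])
```

The slicing form A[:, ::-1] returns a view rather than a copy, which can matter for large arrays.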
