Discretize only certain arrays in a tensor with TensorFlow - python

I have the following array:
import numpy as np
import tensorflow as tf
input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]])
layer = tf.keras.layers.Discretization(num_bins=2, epsilon=0.01)
layer.adapt(input)
layer(input)
<tf.Tensor: shape=(2, 4), dtype=int64, numpy=
array([[0, 1, 1, 1],
[0, 1, 1, 0]])>
This discretizes the whole tensor. I would like to know if there is a way through which I can just discretize the second array in the tensor.

We can create a mask based on the index of the array that needs to be discretized:
def get_mask(x, array_index):
    # start with all ones, then zero out the rows selected by array_index
    mask = tf.Variable(tf.ones_like(x, dtype=tf.float32))
    indices = tf.constant(array_index, dtype=tf.int32)
    updates = tf.zeros((indices.shape[0], x.shape[1]), dtype=tf.float32)
    return tf.compat.v1.scatter_nd_update(mask, indices, updates)
Calling
mask = get_mask(input, np.array([[1]]))  # second array
returns the mask:
array([[1., 1., 1., 1.],
       [0., 0., 0., 0.]])
Then we can apply the mask: tf.cast(layer(input), tf.float32) * (1 - mask) + input * mask, which returns:
array([[-1.5,  1. ,  3.4,  0.5],
       [ 0. ,  1. ,  1. ,  0. ]])
The above should work for any array and any array index to discretize.
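If you prefer to stay on the TF2 API and avoid tf.compat.v1, here is a minimal sketch of the same idea using tf.where with a boolean row mask (the rows_to_discretize name is just illustrative):
import numpy as np
import tensorflow as tf

input = np.array([[-1.5, 1.0, 3.4, 0.5], [0.0, 3.0, 1.3, 0.0]])

layer = tf.keras.layers.Discretization(num_bins=2, epsilon=0.01)
layer.adapt(input)

rows_to_discretize = [1]  # discretize only the second row

# boolean column vector of shape (2, 1); tf.where broadcasts it across the columns
row_mask = tf.constant([[i in rows_to_discretize] for i in range(input.shape[0])])

discretized = tf.cast(layer(input), tf.float32)
result = tf.where(row_mask, discretized, tf.cast(input, tf.float32))
# result:
# [[-1.5, 1. , 3.4, 0.5],
#  [ 0. , 1. , 1. , 0. ]]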

Related

How to implement Multinomial conditional distributions depending on the conditional binary value in Tensorflow Probability?

I am trying to build a graphical model in Tensorflow Probability, where we first sample a number of positive (1) and negative (0) examples (count_i) from a Categorical distribution and then construct a Multinomial distribution (Y_i) depending on the value of count_i. These events (Y_i) are mutually exclusive:
Y_1 ~ Multinomial([0.9, 0.1, 0.05, 0.05, 0.1], total_count = tf.reduce_sum(tf.cast(count == 1, tf.float32)))
Y_2 ~ Multinomial([0.99, 0.01, 0., 0., 0.], total_count = tf.reduce_sum(tf.cast(count == 0, tf.float32)))
I have read these tutorials, but I am stuck on two issues:
1. This code generates two arrays of length 500, whereas I only need one array of 500. What should I change so that only one sample is drawn from the Categorical distribution, and the Multinomial is then constructed from the overall count of the value we are conditioning on?
2. The sample from the Categorical distribution only gives values of 0, whereas it should be a mix of 0s and 1s. What am I doing wrong here?
My code is as follows; you can run it to replicate the behaviour:
import tensorflow as tf
from tensorflow_probability import distributions as tfd

def simplied_model():
    return tfd.JointDistributionSequential([
        tfd.Uniform(low=0., high=1., name='e'),  # e
        lambda e: tfd.Sample(tfd.Categorical(probs=tf.stack([e, 1. - e], 0)), sample_shape=500),  # count - should it be independent?
        lambda count: tfd.Multinomial(
            probs=tf.constant([[.9, 0.1, 0.05, 0.05, 0.1], [0.99, 0.01, 0., 0., 0.]]),
            total_count=tf.cast(tf.stack([tf.reduce_sum(tf.cast(count == 1, tf.float32)),
                                          tf.reduce_sum(tf.cast(count == 0, tf.float32))], 0), dtype=tf.float32))
    ])

tt = simplied_model()
tt.resolve_graph()
tt.sample(1)
The first row will be your Y_{1} and the second will be your Y_{2}. The key is that your output will always have shape (2, 5), because that is the shape of the probability table you are passing to tfd.Multinomial.
Code:
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd

# helper function
def _get_counts(vec):
    zeros = tf.reduce_sum(tf.cast(vec == 0, tf.float32))
    ones = tf.reduce_sum(tf.cast(vec == 1, tf.float32))
    return tf.stack([ones, zeros], 0)

joint = tfd.JointDistributionSequential([
    tfd.Sample(  # sample from uniform to make it 2D
        tfd.Uniform(0., 1., name="e"), 1),
    lambda e: tfd.Sample(
        tfd.Categorical(probs=tf.stack([e, 1. - e], -1)), 500),
    lambda c: tfd.Multinomial(
        probs=[
            [0.9, 0.1, 0.05, 0.05, 0.1],
            [0.99, 0.01, 0., 0., 0.],
        ],
        total_count=_get_counts(c),
    )
])

joint.sample(5)  # or however many you want to sample
Output:
# [<tf.Tensor: shape=(5, 1), dtype=float32, numpy=
# array([[0.5611458 ],
# [0.48223293],
# [0.6097224 ],
# [0.94013655],
# [0.14861858]], dtype=float32)>,
# <tf.Tensor: shape=(5, 1, 500), dtype=int32, numpy=
# array([[[1, 0, 0, ..., 1, 0, 1]],
#
# [[1, 1, 1, ..., 1, 0, 0]],
#
# [[0, 0, 0, ..., 1, 0, 0]],
#
# [[0, 0, 0, ..., 0, 0, 0]],
#
# [[1, 0, 1, ..., 1, 0, 1]]], dtype=int32)>,
# <tf.Tensor: shape=(2, 5), dtype=float32, numpy=
# array([[ 968., 109., 0., 0., 0.],
# [1414., 9., 0., 0., 0.]], dtype=float32)>]
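To pull Y_1 and Y_2 out of a single draw, a minimal usage sketch (the names e, count, y, y_1, y_2 are just illustrative):
e, count, y = joint.sample(1)  # JointDistributionSequential.sample returns a list of tensors
y_1, y_2 = y[0], y[1]          # the two rows of the (2, 5) Multinomial draw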

Exponential of SparseTensor with mapping

I want to take the exp of each element in the sparse matrix. Here is a simple example:
a = np.array([[1, 0, 2, 0], [3, 0, 0, 4]])
a_t = tf.constant(a)
a_s = tf.sparse.from_dense(a_t)
tf.exp(a_s)
But this gives the following error:
ValueError: Attempt to convert a value (<tensorflow.python.framework.sparse_tensor.SparseTensor object at 0x149fd57f0>) with an unsupported type (<class 'tensorflow.python.framework.sparse_tensor.SparseTensor'>) to a Tensor.
Can you please help me to sort this out without converting this to dense matrix?
If you have TensorFlow 2.4 or newer, you can use tf.sparse.map_values:
import tensorflow as tf
import numpy as np
a = np.array([[1., 0., 2., 0.],
[3., 0., 0., 4.]])
a_t = tf.constant(a)
a_s = tf.sparse.from_dense(a_t)
Here is the magic:
tf.sparse.to_dense(tf.sparse.map_values(tf.exp, a_s))
<tf.Tensor: shape=(2, 4), dtype=float64, numpy=
array([[ 2.71828183, 0. , 7.3890561 , 0. ],
[20.08553692, 0. , 0. , 54.59815003]])>
Note that tf.sparse.to_dense is only there so we can visualize the result. Also, I had to convert your values to floating point.
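If you are on a TensorFlow version older than 2.4 (no tf.sparse.map_values), a minimal sketch of the same idea is to rebuild the SparseTensor from the element-wise exp of its stored values, reusing a_s from above:
a_exp = tf.sparse.SparseTensor(
    indices=a_s.indices,
    values=tf.exp(a_s.values),  # exp is applied only to the explicitly stored values
    dense_shape=a_s.dense_shape,
)
tf.sparse.to_dense(a_exp)  # again, only to visualize the result
In both approaches the implicit zeros are left untouched, even though exp(0) is 1; if you need those ones as well, the result is no longer sparse.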

Assign 1d numpy ndarray into columns of a 2d array

Assume dst is an ndarray with shape (5, N), and ramp is an ndarray with shape (5,). (In this case, N = 2):
>>> dst = np.zeros((5, 2))
>>> dst
array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]])
>>> ramp = np.linspace(1.0, 2.0, 5)
>>> ramp
array([1. , 1.25, 1.5 , 1.75, 2. ])
Now I'd like to copy ramp into the columns of dst, resulting in this:
>>> dst
array([[1.  , 1.  ],
       [1.25, 1.25],
       [1.5 , 1.5 ],
       [1.75, 1.75],
       [2.  , 2.  ]])
I didn't expect this to work, and it doesn't:
>>> dst[:] = ramp
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: could not broadcast input array from shape (5) into shape (5,2)
This works, but I'm certain there's a more "numpyesque" way to accomplish this:
>>> dst[:] = ramp.repeat(dst.shape[1]).reshape(dst.shape)
>>> dst
array([[1. , 1. ],
[1.25, 1.25],
[1.5 , 1.5 ],
[1.75, 1.75],
[2. , 2. ]])
Any ideas?
Note:
Unlike "Cloning" row or column vectors, I want to assign ramp into dst (or even a subset of dst). In addition, the solution given there uses a python array as the source, not an ndarray, and thus requires calls to .transpose, etc.
Method 1: Use broadcasting:
As the OP mentioned in a comment, broadcasting works on assignment too:
dst[:] = ramp[:,None]
Method 2: Use column_stack
N = dst.shape[1]
dst[:] = np.column_stack([ramp.tolist()]*N)
Out[479]:
array([[1. , 1. ],
[1.25, 1.25],
[1.5 , 1.5 ],
[1.75, 1.75],
[2. , 2. ]])
Method 3: Use np.tile
N = dst.shape[1]
dst[:] = np.tile(ramp[:,None], (1,N))
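One more option, as a sketch: np.broadcast_to builds a read-only broadcast view that the assignment then copies into dst, and the same pattern works for a subset of dst as mentioned in the question:
import numpy as np

dst = np.zeros((5, 2))
ramp = np.linspace(1.0, 2.0, 5)

dst[:] = np.broadcast_to(ramp[:, None], dst.shape)  # fill every column

dst[:, :1] = ramp[:, None]  # or assign into just the first column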

Understanding axes in NumPy

I was going through the NumPy documentation and am not able to understand one point. It says that, for the example below, the array has rank 2 (it is 2-dimensional). The first dimension (axis) has a length of 2, the second dimension has a length of 3.
[[ 1., 0., 0.],
[ 0., 1., 2.]]
How does the first dimension (axis) have a length of 2?
Edit:
The reason for my confusion is the below statement in the documentation.
The coordinates of a point in 3D space [1, 2, 1] is an array of rank
1, because it has one axis. That axis has a length of 3.
In the original 2D ndarray, I assumed that the number of lists identifies the rank/dimension, and I wrongly assumed that the length of each list denotes the length of each dimension (in that order). So, as per my understanding, the first dimension should have a length of 3, since the length of the first list is 3.
In numpy, axis ordering follows zyx convention, instead of the usual (and maybe more intuitive) xyz.
Visually, it means that for a 2D array where the horizontal axis is x and the vertical axis is y:
x -->
y 0 1 2
| 0 [[1., 0., 0.],
V 1 [0., 1., 2.]]
The shape of this array is (2, 3) because it is ordered (y, x), with the first axis y of length 2.
And verifying this with slicing:
import numpy as np
a = np.array([[1, 0, 0], [0, 1, 2]], dtype=float)
>>> a
Out[]:
array([[ 1., 0., 0.],
[ 0., 1., 2.]])
>>> a[0, :] # Slice index 0 of first axis
Out[]: array([ 1., 0., 0.]) # Get values along second axis `x` of length 3
>>> a[:, 2] # Slice index 2 of second axis
Out[]: array([ 0., 2.]) # Get values along first axis `y` of length 2
You may be confusing the other sentence with the picture example below. Think of it like this: rank = how deeply the lists are nested, and the "length" in your question is the number of items along a given axis.
I think they are trying to describe the definition of shape, which in this case is (2, 3).
In that post, I think the key sentence is this:
In NumPy dimensions are called axes. The number of axes is rank.
If you print the numpy array
print(np.array([[1., 0., 0.], [0., 1., 2.]]))
You'll get the following output
#col1 col2 col3
[[ 1. 0. 0.] # row 1
[ 0. 1. 2.]] # row 2
Think of it as a 2 by 3 matrix: 2 rows, 3 columns. It is a 2D array because it is a list of lists ([[ at the start is a hint that it's 2D).
The 2d numpy array
np.array([[1., 0., 0., 6.], [0., 1., 2., 7.], [3., 4., 5., 8.]])
would print as
#col1 col2 col3 col4
[[ 1.   0.   0.   6.]  # row 1
 [ 0.   1.   2.   7.]  # row 2
 [ 3.   4.   5.   8.]] # row 3
This is a 3 by 4 2D array (3 rows, 4 columns).
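If it helps, here is a small sketch that verifies the rank and the axis lengths directly with .ndim and .shape:
import numpy as np

a = np.array([[1., 0., 0.], [0., 1., 2.]])
a.ndim   # 2 -> rank, i.e. the number of axes
a.shape  # (2, 3) -> axis 0 has length 2 (rows), axis 1 has length 3 (columns)

b = np.array([[1., 0., 0., 6.], [0., 1., 2., 7.], [3., 4., 5., 8.]])
b.shape  # (3, 4) -> 3 rows, 4 columns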
The length of the first dimension is what len gives you:
In [11]: a = np.array([[ 1., 0., 0.], [ 0., 1., 2.]])
In [12]: a
Out[12]:
array([[ 1., 0., 0.],
[ 0., 1., 2.]])
In [13]: len(a) # "length of first dimension"
Out[13]: 2
The second is the length of each "row":
In [14]: [len(aa) for aa in a] # 3 is "length of second dimension"
Out[14]: [3, 3]
Many numpy functions take axis as an argument, for example you can sum over an axis:
In [15]: a.sum(axis=0)
Out[15]: array([ 1., 1., 2.])
In [16]: a.sum(axis=1)
Out[16]: array([ 1., 3.])
The thing to note is that you can have higher dimensional arrays:
In [21]: b = np.array([[[1., 0., 0.], [ 0., 1., 2.]]])
In [22]: b
Out[22]:
array([[[ 1., 0., 0.],
[ 0., 1., 2.]]])
In [23]: b.sum(axis=2)
Out[23]: array([[ 1., 3.]])
Keep the following points in mind when considering Numpy axes:
Each sub-level of a list (or array) represents an axis. For example:
import numpy as np
a = np.array([1,2]) # 1 axis
b = np.array([[1,2],[3,4]]) # 2 axes
c = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) # 3 axes
Axis labels correspond to the level of the sub-list they represent, starting with axis 0 for the outermost list.
To illustrate this, consider the following arrays of different shapes, each with 24 elements:
# 1D Array
a0 = np.array(
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
)
a0.shape # (24,) - here, the length along the 0-axis is 24
# 2D Array
a01 = np.array(
[
[1.1, 1.2, 1.3, 1.4],
[2.1, 2.2, 2.3, 2.4],
[3.1, 3.2, 3.3, 3.4],
[4.1, 4.2, 4.3, 4.4],
[5.1, 5.2, 5.3, 5.4],
[6.1, 6.2, 6.3, 6.4]
]
)
a01.shape # (6, 4) - now, the length along the 0-axis is 6
# 3D Array
# (position labels are stored as strings so the example is runnable)
a012 = np.array(
    [
        [
            ["1.1.1", "1.1.2"],
            ["1.2.1", "1.2.2"],
            ["1.3.1", "1.3.2"]
        ],
        [
            ["2.1.1", "2.1.2"],
            ["2.2.1", "2.2.2"],
            ["2.3.1", "2.3.2"]
        ],
        [
            ["3.1.1", "3.1.2"],
            ["3.2.1", "3.2.2"],
            ["3.3.1", "3.3.2"]
        ],
        [
            ["4.1.1", "4.1.2"],
            ["4.2.1", "4.2.2"],
            ["4.3.1", "4.3.2"]
        ]
    ]
)
a012.shape # (4, 3, 2) - and finally, the length along the 0-axis is 4
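As a quick check (a sketch using the string-labelled a012 above), indexing with one index per axis reads off the position label, where each digit of the label is the corresponding index plus one:
a012[0, 0, 0]  # '1.1.1'
a012[3, 2, 1]  # '4.3.2'
a012[1, 0]     # array(['2.1.1', '2.1.2'], dtype='<U5')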

apply numpy.histogram to multidimensional array

I want to apply numpy.histogram() to a multi-dimensional array along an axis.
Say, for example I have a 2D array and I want to apply histogram() along axis=1.
Code:
import numpy
array = numpy.array([[0.6, 0.7, -0.3, 1.0, -0.8], [0.2, -1.0, -0.5, 0.5, 0.8],
[0.25, 0.3, -0.1, -0.8, 1.0]])
bins = [-1.0, -0.5, 0, 0.5, 1.0, 1.0]
hist, bin_edges = numpy.histogram(array, bins)
print(hist)
Output:
[3 3 3 4 2]
Expected Output:
[[1 1 0 2 1],
[1 1 1 2 0],
[1 1 2 0 1]]
How can I get my expected output?
I tried to use the solution suggested in this post, but it doesn't get me to the expected output.
For n-d cases, you can do this with np.histogram2d just by making a dummy x-axis (i):
def vec_hist(a, bins):
    # dummy x-coordinate: the row index, repeated once per element of that row
    i = np.repeat(np.arange(np.prod(a.shape[:-1])), a.shape[-1])
    return np.histogram2d(i, a.flatten(), (a.shape[0], bins))
Output
vec_hist(array, bins)
Out[453]:
(array([[ 1., 1., 0., 2., 1.],
[ 1., 1., 1., 2., 0.],
[ 1., 1., 2., 0., 1.]]),
array([ 0. , 0.66666667, 1.33333333, 2. ]),
array([-1. , -0.5 , 0. , 0.5 , 0.9999999, 1. ]))
For histograms over an arbitrary axis, you'll probably need to create i using np.meshgrid and np.ravel_multi_index and then use that to reshape the resulting histogram.
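Alternatively, for the 2D case in the question, a minimal sketch with np.apply_along_axis reproduces the expected per-row histograms:
import numpy as np

array = np.array([[0.6, 0.7, -0.3, 1.0, -0.8],
                  [0.2, -1.0, -0.5, 0.5, 0.8],
                  [0.25, 0.3, -0.1, -0.8, 1.0]])
bins = [-1.0, -0.5, 0, 0.5, 1.0, 1.0]

# apply np.histogram to each row (axis=1) and keep only the counts
hist = np.apply_along_axis(lambda row: np.histogram(row, bins)[0], 1, array)
# array([[1, 1, 0, 2, 1],
#        [1, 1, 1, 2, 0],
#        [1, 1, 2, 0, 1]])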
