I have a 2D tf.float32 tensor of xyz coordinates and a 1D tf.int32 tensor of segment_ids.
I want to subtract the mean of the corresponding segment_id from every point.
Please check the code below:
import tensorflow as tf

x_index = tf.constant([1, 1, 2, 2])
y_index = tf.constant([1, 1, 3, 4])
points = tf.constant([[0.1, 0.1, 0.1],
                      [0.11, 0.11, 0.11],
                      [0.2, 0.3, 0.1],
                      [0.2, 0.4, 0.1]])

# Pair the x/y indices so each point gets a single (x, y) pillar id.
points_x_y_indices = tf.transpose(tf.stack([x_index, y_index]))

# idx maps every point to the row of its unique (x, y) pair.
uniques, idx = tf.raw_ops.UniqueV2(x=points_x_y_indices, axis=[0], out_idx=tf.dtypes.int32)
n_pillars = int(tf.reduce_max(idx)) + 1

x_means = tf.math.unsorted_segment_mean(points[:, 0], idx, n_pillars)
y_means = tf.math.unsorted_segment_mean(points[:, 1], idx, n_pillars)
z_means = tf.math.unsorted_segment_mean(points[:, 2], idx, n_pillars)
Now I have the mean over every segment_id in x_means, y_means and z_means. How can I subtract those values from the original points tensor? Of course without looping, as I am trying to avoid tf.py_func.
Thanks!
I figured it out: you can use
full_x_means = tf.gather(x_means, idx)
full_y_means = tf.gather(y_means, idx)
full_z_means = tf.gather(z_means, idx)
Then
pillar_points_xc = points[:, 0] - full_x_means
pillar_points_yc = points[:, 1] - full_y_means
pillar_points_zc = points[:, 2] - full_z_means
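As a side note, the three per-coordinate steps can be collapsed into one, since tf.math.unsorted_segment_mean also averages a 2D tensor along its first axis. A minimal sketch, assuming TF 2.x eager mode and reusing points, idx and n_pillars from above:

means = tf.math.unsorted_segment_mean(points, idx, n_pillars)  # (n_pillars, 3)
centered = points - tf.gather(means, idx)                      # (n_points, 3)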
I have encoded my images (masks) of dimensions (img_width x img_height x 1) with one-hot encoding in this way:
import numpy as np

def OneHotEncoding(im, n_classes):
    one_hot = np.zeros((im.shape[0], im.shape[1], n_classes), dtype=np.uint8)
    for i, unique_value in enumerate(np.unique(im)):
        one_hot[:, :, i][im == unique_value] = 1
    return one_hot
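For reference, a tiny usage sketch (mask values chosen arbitrarily for illustration): each unique value in the mask gets its own channel.

mask = np.array([[0, 1],
                 [2, 1]], dtype=np.uint8)
encoded = OneHotEncoding(mask, n_classes=3)
print(encoded.shape)     # (2, 2, 3)
print(encoded[:, :, 1])  # 1 exactly where mask == 1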
After doing some data manipulation with deep learning, the softmax activation function yields probabilities instead of 0 and 1 values, so in my decoder I wanted to implement the following approach:
Threshold the output to obtain only 0 or 1.
Multiply each channel by a weight equal to the channel index.
Take the max between labels along the channels axis.
import numpy as np

arr = np.array([
    [[0.1,0.2,0,5],[0.2,0.4,0.7],[0.3,0.5,0.8]],
    [[0.3,0.6,0 ],[0.4,0.9,0.1],[0 ,0 ,0.2]],
    [[0.7,0.1,0.1],[0,6,0.1,0.1],[0.6,0.6,0.3]],
    [[0.6,0.2,0.3],[0.4,0.5,0.3],[0.1,0.2,0.7]]
])
# print(arr.dtype, arr.shape)

def oneHotDecoder(img):
    # Thresholding
    img[img < 0.5] = 0
    img[img >= 0.5] = 1
    # weights of the labels
    img = [i * img[:, :, i] for i in range(img.shape[2])]
    # take the max label
    img = np.amax(img, axis=2)
    print(img.shape)
    return img

arr2 = oneHotDecoder(arr)
print(arr2)
print(arr2)
My questions are:
How do I get rid of this error:
line 15, in oneHotDecoder
    img[img<0.5]=0
TypeError: '<' not supported between instances of 'list' and 'float'
Are there any other issues in my implementation that you would suggest improving?
Thanks in advance.
You have typos with commas and dots in some of your items (e.g. your first list should be [0.1, 0.2, 0.5] instead of [0.1, 0.2, 0, 5]).
The fixed list is:
l = [
    [[0.1,0.2,0.5],[0.2,0.4,0.7],[0.3,0.5,0.8]],
    [[0.3,0.6,0 ],[0.4,0.9,0.1],[0 ,0 ,0.2]],
    [[0.7,0.1,0.1],[0.6,0.1,0.1],[0.6,0.6,0.3]],
    [[0.6,0.2,0.3],[0.4,0.5,0.3],[0.1,0.2,0.7]]
]
Then you could do:
np.array(l) # np.stack(l) would work as well
Which would yield:
array([[[0.1, 0.2, 0.5],
        [0.2, 0.4, 0.7],
        [0.3, 0.5, 0.8]],

       [[0.3, 0.6, 0. ],
        [0.4, 0.9, 0.1],
        [0. , 0. , 0.2]],

       [[0.7, 0.1, 0.1],
        [0.6, 0.1, 0.1],
        [0.6, 0.6, 0.3]],

       [[0.6, 0.2, 0.3],
        [0.4, 0.5, 0.3],
        [0.1, 0.2, 0.7]]])
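That conversion also resolves the TypeError: with the ragged sublists, np.array falls back to an object array of Python lists, and comparing a list to a float raises. Once arr is a proper float ndarray, the boolean masks in oneHotDecoder work. On the second question: after the list comprehension the channel axis becomes axis 0, so the max in the original decoder should be taken with axis=0, not axis=2. A common simplification (a sketch, not byte-identical to the threshold-and-weight scheme when several channels pass the threshold) is a single argmax over the channel axis:

arr = np.array(l)                 # now a proper (4, 3, 3) float array
decoded = np.argmax(arr, axis=2)  # per-pixel channel index, shape (4, 3)
print(decoded)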
I have this exercise where I get to build a simple neural network with one input layer and one hidden layer... I wrote the code below to perform a simple matrix multiplication, but it does not produce the same result as when I do the multiplication by hand. What am I doing wrong?
          #toes %win #fans
ih_wgt = ([0.1, 0.2, -0.1],  # hid[0]
          [-0.1, 0.1, 0.9],  # hid[1]
          [0.1, 0.4, 0.1])   # hid[2]

         #hid[0] hid[1] hid[2]
ho_wgt = ([0.3, 1.1, -0.3],  # hurt?
          [0.1, 0.2, 0.0],   # win?
          [0.0, 1.3, 0.1])   # sad?

weights = [ih_wgt, ho_wgt]

def w_sum(a, b):
    assert(len(a) == len(b))
    output = 0
    for i in range(len(a)):
        output += (a[i] * b[i])
    return output

def vect_mat_mul(vec, mat):
    assert(len(vec) == len(mat))
    output = [0, 0, 0]
    for i in range(len(vec)):
        output[i] = w_sum(vec, mat[i])
        return output

def neural_network(input, weights):
    hid = vect_mat_mul(input, weights[0])
    pred = vect_mat_mul(hid, weights[1])
    return pred

toes  = [8.5, 9.5, 9.9, 9.0]
wlrec = [0.65, 0.8, 0.8, 0.9]
nfans = [1.2, 1.3, 0.5, 1.0]

input = [toes[0], wlrec[0], nfans[0]]
pred = neural_network(input, weights)
print(pred)
The output of my code is:
[0.258, 0, 0]
The way I attempted to solve it by hand is as follows:
I multiplied the input vector [8.5, 0.65, 1.2] with the input weight matrix

ih_wgt = ([0.1, 0.2, -0.1],  # hid[0]
          [-0.1, 0.1, 0.9],  # hid[1]
          [0.1, 0.4, 0.1])   # hid[2]

which gives the hidden vector [0.86, 0.295, 1.23].
The hidden vector is then fed back into the network as an input vector and multiplied by the hidden weight matrix

ho_wgt = ([0.3, 1.1, -0.3],  # hurt?
          [0.1, 0.2, 0.0],   # win?
          [0.0, 1.3, 0.1])   # sad?

giving the correct output prediction:
[0.2135, 0.145, 0.5065]
Your help would be much appreciated!
You're almost there! The only problem is a small indentation slip:
def vect_mat_mul(vec, mat):
    assert(len(vec) == len(mat))
    output = [0, 0, 0]
    for i in range(len(vec)):
        output[i] = w_sum(vec, mat[i])
    return output  # <-- This one was inside the for loop
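As a quick check, reusing the question's input and weights, the fixed function now reproduces the hand calculation:

pred = neural_network([8.5, 0.65, 1.2], weights)
print(pred)  # ≈ [0.2135, 0.145, 0.5065]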
Say that I have a 2d array ar like this:
0.9, 0.1, 0.3
0.4, 0.5, 0.1
0.5, 0.8, 0.5
And I want to sample from [1, 0] according to this probability array.
rdchoice = lambda x: numpy.random.choice([1, 0], p=[x, 1-x])
I have tried two methods:
1) Reshape it into a 1d array, map rdchoice over it, and then reshape back to 2d:
np.array(list(map(rdchoice, ar.reshape((-1,))))).reshape(ar.shape)
2) Use the vectorize function:
func = numpy.vectorize(rdchoice)
func(ar)
But both of these ways are too slow, and I learned that np.vectorize is essentially a for loop under the hood; in my experiments, map was no faster than vectorize.
I think this can be done faster; if the 2d array is large, both approaches are unbearably slow.
You should be able to do this like so:
>>> p = np.array([[0.9, 0.1, 0.3], [0.4, 0.5, 0.1], [0.5, 0.8, 0.5]])
>>> (np.random.rand(*p.shape) < p).astype(int)
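This works because each entry of np.random.rand is uniform on [0, 1), so the comparison is True with probability exactly p, giving one independent Bernoulli(p) draw per cell. A quick sanity check (illustrative; the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility
p = np.array([[0.9, 0.1, 0.3], [0.4, 0.5, 0.1], [0.5, 0.8, 0.5]])
draws = (rng.random((100000,) + p.shape) < p).astype(int)
print(draws.mean(axis=0))  # empirical frequencies approach p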
Actually, I can use np.random.binomial:
import numpy as np
p = [[0.9, 0.1, 0.3],
     [0.4, 0.5, 0.1],
     [0.5, 0.8, 0.5]]
np.random.binomial(1, p)
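np.random.binomial broadcasts over the probability array, so a single call returns one 0/1 draw per cell with the same shape as p. For example (seed arbitrary, for reproducibility):

np.random.seed(0)  # arbitrary seed
print(np.random.binomial(1, p))  # 0/1 array with the same (3, 3) shape as p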
I have the following task: given two vectors
[v_1, ..., v_n] and [w_1, ..., w_n], build the new vector [v_1] * w_1 + ... + [v_n] * w_n, i.e. each element v_i repeated w_i times.
For example, for v = [0.5, 0.1, 0.7] and w = [2, 3, 0] the result will be
[0.5, 0.5, 0.1, 0.1, 0.1].
In vanilla Python, the solution would be:
v, w = [...], [...]
res = []
for i in range(len(v)):
    res += [v[i]] * w[i]
Is it possible to build this within a TensorFlow graph? It seems to be an extension of tf.boolean_mask with an additional argument like weights or repeats.
Here is a simple solution using tf.sequence_mask:
import tensorflow as tf

v = tf.constant([0.5, 0.1, 0.7])
w = tf.constant([2, 3, 0])

# Row i of the mask holds w[i] True values: [[T, T, F], [T, T, T], [F, F, F]]
m = tf.sequence_mask(w)
# Broadcast each v[i] across a full row, then keep only the masked entries.
v2 = tf.tile(v[:, None], [1, tf.shape(m)[1]])
res = tf.boolean_mask(v2, m)

sess = tf.InteractiveSession()
print(res.eval())
# array([0.5, 0.5, 0.1, 0.1, 0.1], dtype=float32)
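For what it's worth, in newer TensorFlow versions (tf.repeat is available from roughly TF 2.1 onward) the whole mask-and-tile construction collapses into a single op:

import tensorflow as tf

v = tf.constant([0.5, 0.1, 0.7])
w = tf.constant([2, 3, 0])
res = tf.repeat(v, w)  # repeats v[i] exactly w[i] times
print(res.numpy())  # [0.5 0.5 0.1 0.1 0.1]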