Testing tf.keras.layers.RandomTranslation on 1D data in tensorflow - python

I'm trying to randomly translate 1D vectors as they get passed into my tensorflow model. I wanted to check how this would affect my data so I can scale the random translation amount properly, but every time I pass my data into the layer, the output is unchanged. Here is my standalone example:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
layer = tf.keras.layers.RandomTranslation(
    height_factor=0.1,
    width_factor=0.1,
    fill_mode='reflect',
    interpolation='bilinear',
    seed=None)
input_data = np.random.random((50, 100, 1, 1))
check = layer(input_data)
check = check.numpy().reshape(-1, 100)
input_data = input_data.reshape(-1, 100)
for i in range(2):
    plt.plot(input_data[i], 'blue')
    plt.plot(check[i], 'orange')
The resulting plot shows the input (blue) and the layer output (orange) lying exactly on top of each other.
What do I need to do to get this layer to work? I've tried adding dimensions but it didn't help. Is this because the "model" isn't in training mode?
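For reference, Keras image augmentation layers such as RandomTranslation are only active in training mode and behave as a no-op at inference time, which would explain the unchanged output; a minimal check, reusing the layer and data above:
check = layer(input_data, training=True)  # force training behaviour outside of fit()
check = check.numpy().reshape(-1, 100)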

Related

Why use (regressor.layers[0].input, regressor.layers[-1].output) instead of just regressor in DeepExplainer?

Hi everyone, I came across an example of how to use SHAP on an LSTM: Time-step wise feature importance in deep learning using SHAP. I'm curious why the author chose to use
e = shap.DeepExplainer((regressor.layers[0].input,
regressor.layers[-1].output),data)
instead of just
e = shap.DeepExplainer(regressor,data)
I suspect the reason is very important but I cannot be sure. Anyone can shed some light on this?
Partial code below
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from keras.models import load_model
import shap
regressor = load_model('lstm_stock.h5')
pred_x = regressor.predict_classes(X_train)
random_ind = np.random.choice(X_train.shape[0], 1000,
                              replace=False)
print(random_ind)
data = X_train[random_ind[0:500]]
e = shap.DeepExplainer((regressor.layers[0].input,
regressor.layers[-1].output),data)
test1 = X_train[random_ind[500:1000]]
shap_val = e.shap_values(test1)
shap_val = np.array(shap_val)...
The solution is quite simple. Let's look at the DeepExplainer documentation. This is the __init__ function:
__init__(model, data, session=None, learning_phase_flags=None)
Your confusion is about the first argument, that is, model. According to the documentation, for Tensorflow, model is:
a pair of TensorFlow tensors (or a list and a tensor) that specifies
the input and output of the model to be explained.
So that's it. The first argument is just a pair indicating the input and the output of the model. In this case:
(regressor.layers[0].input, regressor.layers[-1].output)
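In other words, the two elements are simply the model's symbolic input and output tensors; a quick way to inspect them, assuming regressor is the Keras model loaded in the question:
print(regressor.layers[0].input)    # symbolic tensor feeding the first layer
print(regressor.layers[-1].output)  # symbolic tensor produced by the last layer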
Update:
In the Front Page DeepExplainer MNIST Example, however, the following piece of code is shown:
import shap
import numpy as np
# select a set of background examples to take an expectation over
background = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]
# explain predictions of the model on three images
e = shap.DeepExplainer(model, background)
# ...or pass tensors directly
# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)
shap_values = e.shap_values(x_test[1:5])
From this it seems that, besides the pair of tensors, it is also possible to pass the model itself, just as in PyTorch, where you can pass an nn.Module object instead of the pair.
My guess is that for TensorFlow this is simply an undocumented feature, and that passing the model directly is equivalent to passing the pair of tensors.
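If that guess is right, the two forms are interchangeable; a minimal sketch reusing the names from the MNIST snippet above (model, background, x_test), purely to illustrate:
e1 = shap.DeepExplainer(model, background)
e2 = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)
# the two explainers should yield the same attributions for the same inputs
shap_values_1 = e1.shap_values(x_test[1:5])
shap_values_2 = e2.shap_values(x_test[1:5])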

Tensorflow 2.3. How to change the batch for each epoch?

I would like to train with a different custom image augmentation during each epoch in the training.
The wrong solution would be to save the augmented images and run the training on the saved images, because if you try to load hundreds of thousands of images for the training, you will get a memory error.
The right solution has to apply the augmentation during the fit routine.
Can you please show me how to do this, ideally pointing to a working example?
It won't create many images, and you won't get a memory error. While iterating over the dataset, tf.data applies the random transformations on the fly, without "creating" new images that are kept in memory. So just do it like this:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
import tensorflow_datasets as tfds
[train_set_raw] = tfds.load('cats_vs_dogs', split=['train[:100]'], as_supervised=True)
def augment(tensor):
    tensor = tf.cast(x=tensor, dtype=tf.float32)
    tensor = tf.image.rgb_to_grayscale(images=tensor)
    tensor = tf.image.resize(images=tensor, size=(96, 96))
    tensor = tf.divide(x=tensor, y=tf.constant(255.))
    tensor = tf.image.random_flip_left_right(image=tensor)
    tensor = tf.image.random_brightness(image=tensor, max_delta=2e-1)
    tensor = tf.image.random_crop(value=tensor, size=(64, 64, 1))
    return tensor
train_set_raw = train_set_raw.shuffle(128).map(lambda x, y: (augment(x), y)).batch(16)
import matplotlib.pyplot as plt
plt.imshow((next(iter(train_set_raw))[0][0][..., 0].numpy()*255).astype(int))
plt.show()
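To actually train with this pipeline, pass the mapped dataset straight to fit(); because the augmentation lives inside map(), every epoch re-draws the random transformations. A minimal sketch (the tiny model below is illustrative, not from the answer above):
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_set_raw, epochs=5)  # each epoch sees differently augmented images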

CNN with vector output and 2D image graph input (input is an array)

I am trying to create a CNN in Keras (Python 3.7) which ingests a 2D matrix input (much like a grayscale image) and outputs a 1 dimensional vector. So far I did manage to get results, but I am not sure if what I am doing is correct (or if my intuition is).
I input a 100x50 array into my convolutional layer. This 2D array holds the peak information at every position (i.e. the x-axis pertains to the position, the y-axis to the frequency, and each cell gives the intensity). The 3D graph of this shows something akin to the one given in this link.
From (all of) the literature I have read, I learned that CNNs accept image data: the image is converted into pixel values and then repeatedly convolved and pooled to get the output. However, I am using a MATLAB simulator to get my input data, and I have access to the raw 2D array containing information on the peak frequency at each point.
My intuition is this: if we normalize each cell and feed the information to the CNN, it will be as if I fed the normalized pixel values of the image to the CNN, since my raw 2D array also has height, width and depth=1, like an image.
Please enlighten me if my thinking is correct or wrong.
My code is as follows:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import keras
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, BatchNormalization, Reshape
'''load sample input'''
BGS1 = pd.read_csv("C:/Users/strain1_input.csv")
BGS2 = pd.read_csv("C:/Users/strain2_input.csv")
BGS3 = pd.read_csv("C:/Users/strain3_input.csv")
BGS_ = np.array([BGS1, BGS2, BGS3]) #3x100x50 array
BGS_normalized = BGS_/np.amax(BGS_)
'''load sample output'''
BFS1 = pd.read_csv("C:/Users/strain1_output.csv")
BFS2 = pd.read_csv("C:/Users/strain2_output.csv")
BFS3 = pd.read_csv("C:/Users/strain3_output.csv")
BFS_ = np.array([BFS1, BFS2, BFS3]) #3x100
BFS_normalized = BFS_/50 #since max value for each cell is 50
#after splitting data into training, validation and testing sets,
output_nodes = 100
n_classes = 1
batch_size_ = 8 #so far, optimized for 8 batch size
epoch = 100
input_layer = Input(shape=(45,300,1))
conv1 = Conv2D(16, 3, padding="same", activation="relu",
               input_shape=(45,300,1))(input_layer)
pool1 = MaxPooling2D(pool_size=(2,2),padding="same")(conv1)
flat = Flatten()(pool1)
hidden1 = Dense(10, activation='softmax')(flat) #relu
batchnorm1 = BatchNormalization()(hidden1)
output_layer = Dense(output_nodes*n_classes, activation="softmax")(batchnorm1)
output_layer2 = Dense(output_nodes*n_classes, activation="relu")(output_layer)
output_reshape = Reshape((output_nodes, n_classes))(output_layer2)
model = Model(inputs=input_layer, outputs=output_reshape)
print(model.summary())
model.compile(loss='mean_squared_error', optimizer='adam', sample_weight_mode='temporal')
model.fit(train_X,train_label,batch_size=batch_size_,epochs=epoch)
predictions = model.predict(train_X)
What you did is exactly the strategy used to feed non-image data into 2D convolutional layers. As long as the model predicts correctly, what you did is correct. It's just that CNNs can perform poorly on non-image data, or there may be a higher chance of overfitting. But then again, as long as it performs well, it's fine.
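To make the idea concrete, a minimal sketch of getting a raw 2D matrix into the channels-last shape a Conv2D layer expects (the shapes below are illustrative, not taken from the question's files):
import numpy as np
raw = np.random.random((3, 100, 50))      # 3 samples, each a 100x50 matrix of peak intensities
raw = raw / np.amax(raw)                  # normalize each cell to [0, 1], like pixel values
cnn_input = raw.reshape(-1, 100, 50, 1)   # add a channels axis: (samples, height, width, 1)
print(cnn_input.shape)                    # (3, 100, 50, 1)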

How to convert Tensorflow dataset to 2D numpy array

I have a TensorFlow dataset which contains nearly 15000 multicolored images with 168*84 resolution and a label for each image. Its type and shape are like this:
< ConcatenateDataset shapes: ((168, 84, 3), ()), types: (tf.float32, tf.int32)>
I need to use it to train my network. That's why I need to pass it as a parameter to this function that I built my layers in:
def cnn_model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 168, 84, 3])
    # Convolutional Layer #1
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
.
.
.
I tried to convert each tensor into an np.array (which is the proper type for the function above, I guess) by using tf.eval() and np.ravel(), but I failed.
So, how can I convert this dataset into the proper type to pass it to the function?
Plus
I am new to Python and TensorFlow, and I don't think I understand why there are datasets if we cannot use them directly to build layers (I am following the tutorial on TensorFlow's website, btw).
Thanks.
You could try eager execution; previously I gave an answer using session.run (shown below). During eager execution, calling .numpy() on a tensor will convert that tensor to a numpy array. Example code (from my use case):
#enable eager execution
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
print('Is executing eagerly?',tf.executing_eagerly())
#load datasets
import tensorflow_datasets as tfds
dataset, metadata = tfds.load('cycle_gan/horse2zebra',
with_info=True, as_supervised=True)
train_horses, train_zebras = dataset['trainA'], dataset['trainB']
#load dataset in to numpy array
train_A=train_horses.batch(1000).make_one_shot_iterator().get_next()[0].numpy()
print(train_A.shape)
#preview one of the images
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
print(train_A.shape)
plt.imshow(train_A[1])
plt.show()
Old, session run, answer:
I recently had this problem, and I did it like this:
#load datasets
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
dataset, metadata = tfds.load('cycle_gan/horse2zebra',
with_info=True, as_supervised=True)
train_horses, train_zebras = dataset['trainA'], dataset['trainB']
#load dataset in to numpy array
sess = tf.compat.v1.Session()
tra=train_horses.batch(1000).make_one_shot_iterator().get_next()
train_A=np.array(sess.run(tra)[0])
print(train_A.shape)
sess.close()
#preview one of the images
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
print(train_A.shape)
plt.imshow(train_A[1])
plt.show()
It doesn't sound like you set things up using the TensorFlow Dataset pipeline; here is the guide for doing so:
https://www.tensorflow.org/programmers_guide/datasets
You can either follow that (it's the right approach, but there's a small learning curve to get used to it), or you can just pass in the numpy array to sess.run as part of the feed_dict parameter. If you go this way then you should just create a tf.placeholder which will be populated by the value in feed_dict. Many of the basic tutorial examples here follow this approach:
https://github.com/aymericdamien/TensorFlow-Examples
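A minimal TF1-style sketch of the feed_dict approach described above (the layer, names and shapes are illustrative, not from the question):
import numpy as np
import tensorflow as tf  # TF 1.x, or tf.compat.v1 with v2 behaviour disabled

x = tf.placeholder(tf.float32, shape=[None, 168, 84, 3], name='x')
conv1 = tf.layers.conv2d(inputs=x, filters=32, kernel_size=[5, 5],
                         padding='same', activation=tf.nn.relu)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.random((4, 168, 84, 3)).astype(np.float32)
    out = sess.run(conv1, feed_dict={x: batch})  # numpy in, numpy out
    print(out.shape)  # (4, 168, 84, 32)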
I also needed to accomplish this task (Dataset to array), but without turning on eager mode. I managed to come up with the following:
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
tensor_array = tf.TensorArray(dtype=dataset.element_spec.dtype,
                              size=0,
                              dynamic_size=True,
                              element_shape=dataset.element_spec.shape)
tensor_array = dataset.reduce(tensor_array, lambda a, t: a.write(a.size(), t))
tensor = tf.reshape(tensor_array.concat(), (-1,) + tuple(dataset.element_spec.shape))
array = tf.Session().run(tensor)
print(type(array))
# <class 'numpy.ndarray'>
print(array)
# [[1 2]
# [3 4]]
What this does:
We start with a dataset containing 2 tensors of shape (2,).
Since eager is off, we need to run the dataset through a Tensorflow session. And since a session requires a tensor, we have to convert the dataset into a tensor.
To accomplish this, we use Dataset.reduce() to put all the elements into a TensorArray (symbolically).
We now use TensorArray.concat() to convert the whole array into a single tensor. However when we do this the whole dataset becomes flattened into a 1-D array. So we need tf.reshape() to get it back into our original tensor's shape, plus an extra dimension to stack them all.
Finally we take the tensor and run it through a session. This gives us our numpy ndarray.
This was the simplest method for me for a supervised problem with (X, y).
def dataset_to_numpy(ds):
    """
    Convert tensorflow dataset to numpy arrays
    """
    images = []
    labels = []
    # Iterate over a dataset
    for i, (image, label) in enumerate(tfds.as_numpy(ds)):
        images.append(image)
        labels.append(label)
    for i, img in enumerate(images):
        if i < 3:
            print(img.shape, labels[i])
    return images, labels
Usage:
import tensorflow_datasets as tfds
ds = tfds.load('mnist', split='train', as_supervised=True)
images, labels = dataset_to_numpy(ds)
You can use the following method to get the images and the corresponding labels:
def separate_dataset(dataset):
    images, labels = tf.compat.v1.data.make_one_shot_iterator(dataset.batch(len(dataset))).get_next()
    return images, labels

How to specify multiple labels for a given data point in keras?

I am trying to solve a classification problem using a sequential keras model.
In Keras, model.fit requires two numpy arrays to train on - data, labels.
This works correctly if each row of the data has one corresponding label.
However, for my use, I have more than one classification possible for a given data point.
Can this be handled in keras? If so, what should be the format of my data and labels numpy array?
Sample inputs could look like this:
data[0] = ['What is the colour of the shirt?']
#This text is converted to a vector using a 300 dimension GloVe embedding layer and then processed.
label[0] = ['Red','Orange','Brown']
I require my model to train such that any of the 3 classes can be correct for the given question asked.
Any help would be great.
You can do this with MultiLabelBinarizer:
from sklearn.preprocessing import MultiLabelBinarizer
lb = MultiLabelBinarizer()
label = lb.fit_transform(label)
You can then pass the labels to the fit function with 'categorical_crossentropy' loss.
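For example, with the sample labels from the question (the extra rows below are made up for illustration), the binarizer turns each label list into a multi-hot row:
from sklearn.preprocessing import MultiLabelBinarizer
lb = MultiLabelBinarizer()
label = [['Red', 'Orange', 'Brown'], ['Red'], ['Blue', 'Orange']]
binarized = lb.fit_transform(label)
print(lb.classes_)  # ['Blue' 'Brown' 'Orange' 'Red']
print(binarized)
# [[0 1 1 1]
#  [0 0 0 1]
#  [1 0 1 0]]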
If you want to do it with Keras:
from keras.utils import to_categorical
import numpy as np
unique_labels, new_labels = np.unique(label, return_inverse=True)
to_categorical(new_labels, num_classes=None)
