I want to build a Seq2Seq model for reconstruction purposes. The model is trained to reconstruct the normal time series, and the assumption is that it will do badly at reconstructing anomalous time series, having not seen them during training.
I have some gaps in my code and also in my understanding. I took this as an orientation and have done the following so far:
Training data: input_data.shape = (1000, 60, 1) and target_data.shape = (1000, 60, 1), the target data being the same as the training data, only in reversed order, as suggested in the paper here.
For inference: I want to predict on another time series of shape (3000, 60, 1) with the trained model. Two points are still open: how do I specify the input data for my training model, and how do I build the inference part with the stop condition?
Please correct any mistakes.
from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import TimeDistributed

num_encoder_tokens = 1  # number of features
num_decoder_tokens = 1  # number of features
encoder_seq_length = None
decoder_seq_length = None
batch_size = 50
epochs = 40

# same data for training
input_seqs = ()  # shape (1000, 60, 1) with sliding windows
target_seqs = ()  # shape (1000, 60, 1) with sliding windows but reversed
x =  # what does x have to be?

# data for inference
# how do I specify the input data for my other time series?

# Define training model
encoder_inputs = Input(shape=(encoder_seq_length, num_encoder_tokens))
encoder = LSTM(128, return_state=True, return_sequences=True)
encoder_outputs = encoder(encoder_inputs)
_, encoder_states = encoder_outputs[0], encoder_outputs[1:]

decoder_inputs = Input(shape=(decoder_seq_length, num_decoder_tokens))
# return_state=True so the inference model below can also return the states
decoder = LSTM(128, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder(decoder_inputs, initial_state=encoder_states)
decoder_dense = TimeDistributed(Dense(num_decoder_tokens, activation='tanh'))
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Training
model.compile(optimizer='adam', loss='mse')
model.fit([input_seqs, x], target_seqs,
          batch_size=batch_size, epochs=epochs)

# Define sampling models for inference
encoder_model = Model(encoder_inputs, encoder_states)
# the state inputs must match the 128 LSTM units (not 100)
decoder_state_input_h = Input(shape=(128,))
decoder_state_input_c = Input(shape=(128,))
decoder_states = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder(decoder_inputs,
                                            initial_state=decoder_states)
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states,
                      [decoder_outputs, state_h, state_c])

# Sampling loop for a batch of sequences
states_values = encoder_model.predict(input_seqs)
stop_condition = False
while not stop_condition:
    output_tokens = decoder_model.predict([target_seqs] + states_values)
    # what else do I need to include here?
    break
from numpy import array

def predict_sequence(infenc, infdec, source, n_steps, cardinality):
    # encode
    state = infenc.predict(source)
    # start-of-sequence input
    target_seq = array([0.0 for _ in range(cardinality)]).reshape(1, 1, cardinality)
    # collect predictions
    output = list()
    for t in range(n_steps):
        # predict the next step
        yhat, h, c = infdec.predict([target_seq] + state)
        # store prediction
        output.append(yhat[0, 0, :])
        # update state
        state = [h, c]
        # update target sequence: feed the prediction back in
        target_seq = yhat
    return array(output)
You can see that the output from every time step is fed back into the LSTM externally.
You can refer to this blog post to see how it is done during inference:
https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/
During training, we feed the data in one shot. I think you understand that part.
But at inference time we can't do that. We have to feed the data one time step at a time, get back the hidden and cell states, and keep looping until the last output is generated.
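Tying this back to the reconstruction setting: once you have trained encoder_model and decoder_model, you can score each test window by its reconstruction error. Below is a minimal sketch under my own assumptions (test_windows is an illustrative name for your (3000, 60, 1) array, and I assume targets were reversed during training, as in your setup):

import numpy as np

errors = []
for window in test_windows:
    source = window.reshape(1, 60, 1)
    # Reconstruct the window one step at a time with the inference models.
    reconstruction = predict_sequence(encoder_model, decoder_model,
                                      source, n_steps=60, cardinality=1)
    # Targets were reversed in training, so compare with the reversed window.
    errors.append(np.mean((reconstruction - source[0, ::-1, :]) ** 2))

# Windows with unusually high reconstruction error are candidate anomalies.
threshold = np.mean(errors) + 3 * np.std(errors)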
Google's Recommendation Systems course includes a section on Retrieval, where it is mentioned that recommendations can be made by checking the similarity between a user embedding Ψ(X) and a movie embedding Vj.
How do I get a particular user's embedding through Ψ(X)? Going through the code below (which can be found here), the output of create_network() should be Ψ(X), so how would we extract the embedding of a particular user to create recommendations for them?
def build_softmax_model(rated_movies, embedding_cols, hidden_dims):
  """Builds a Softmax model for MovieLens.
  Args:
    rated_movies: DataFrame of training examples.
    embedding_cols: A dictionary mapping feature names (string) to embedding
      column objects. This will be used in tf.feature_column.input_layer() to
      create the input layer.
    hidden_dims: int list of the dimensions of the hidden layers.
  Returns:
    A CFModel object.
  """
  def create_network(features):
    """Maps input features dictionary to user embeddings.
    Args:
      features: A dictionary of input string tensors.
    Returns:
      outputs: A tensor of shape [batch_size, embedding_dim].
    """
    # Create a bag-of-words embedding for each sparse feature.
    inputs = tf.feature_column.input_layer(features, embedding_cols)
    # Hidden layers.
    input_dim = inputs.shape[1].value
    for i, output_dim in enumerate(hidden_dims):
      w = tf.get_variable(
          "hidden%d_w_" % i, shape=[input_dim, output_dim],
          initializer=tf.truncated_normal_initializer(
              stddev=1./np.sqrt(output_dim))) / 10.
      outputs = tf.matmul(inputs, w)
      input_dim = output_dim
      inputs = outputs
    return outputs

  train_rated_movies, test_rated_movies = split_dataframe(rated_movies)
  train_batch = make_batch(train_rated_movies, 200)
  test_batch = make_batch(test_rated_movies, 100)

  with tf.variable_scope("model", reuse=False):
    # Train
    train_user_embeddings = create_network(train_batch)
    train_labels = select_random(train_batch["label"])
  with tf.variable_scope("model", reuse=True):
    # Test
    test_user_embeddings = create_network(test_batch)
    test_labels = select_random(test_batch["label"])
    movie_embeddings = tf.get_variable(
        "input_layer/movie_id_embedding/embedding_weights")

  test_loss = softmax_loss(
      test_user_embeddings, movie_embeddings, test_labels)
  train_loss = softmax_loss(
      train_user_embeddings, movie_embeddings, train_labels)
  _, test_precision_at_10 = tf.metrics.precision_at_k(
      labels=test_labels,
      predictions=tf.matmul(test_user_embeddings, movie_embeddings, transpose_b=True),
      k=10)

  metrics = (
      {"train_loss": train_loss, "test_loss": test_loss},
      {"test_precision_at_10": test_precision_at_10})
  embeddings = {"movie_id": movie_embeddings}
  return CFModel(embeddings, train_loss, metrics)
CFModel is a helper class used to train the model with SGD.
If you pass the feature vector for a single user to the embedding model, its output Ψ(X) will be the query vector for that user, with shape [1, embedding_dim].
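As a concrete illustration, here is a minimal sketch of turning that query vector into recommendations (sess, single_user_features, and movie_embeddings_value are illustrative names, not part of the original colab; the code is TF1 graph-mode like the model above):

import numpy as np

# Psi(X): run the trained network on one user's features -> shape [1, embedding_dim]
with tf.variable_scope("model", reuse=True):
    user_embedding_op = create_network(single_user_features)
user_embedding, movie_embeddings_value = sess.run(
    [user_embedding_op, movie_embeddings])

# Score every movie by dot-product similarity with the user's query vector.
scores = user_embedding.dot(movie_embeddings_value.T)  # shape [1, num_movies]
top_10 = np.argsort(-scores[0])[:10]  # indices of the 10 highest-scoring movies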
Good afternoon, everyone. I started studying recurrent neural networks not long ago, so I will be glad for any advice.
Today I tried to use bidirectional LSTM layers and ran into the problem of poor text generation.
I used my own ~400 KB dataset for testing.
BUFFER_SIZE = 10000
BATCH_SIZE = 64
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)

vocab_size = len(vocab)
embedding_dim = 512  # 256
rnn_units = 512

def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.LSTM(rnn_units,
                             return_sequences=True, stateful=True,
                             recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(
            rnn_units, return_sequences=True, input_shape=expanded_input.shape)),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model

model = build_model(
    vocab_size=len(vocab),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=BATCH_SIZE)
model.summary()
To define expanded_input.shape, I used this code:
input_data = tf.random.normal((BATCH_SIZE, 32))
repeated_input = tf.reshape(tf.repeat(input=input_data, repeats=5, axis=1),
                            shape=(BATCH_SIZE, 5, 32))
expanded_input = tf.expand_dims(input=input_data, axis=1)
For the text generation function, this part of the code was used:
def generate_text(model, num_generate, temperature, start_string):
    input_eval = [char2idx[s] for s in start_string]  # string to numbers (vectorizing)
    input_eval = tf.expand_dims(input_eval, 0)  # dimension expansion
    text_generated = []  # empty list to store the results
    model.reset_states()  # clears the hidden states in the RNN
    for i in range(num_generate):  # run a loop for the number of characters to generate
        predictions = model(input_eval)  # prediction for a single character
        predictions = tf.squeeze(predictions, 0)  # remove the batch dimension
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])
    return (start_string + ''.join(text_generated))
As a result, the output mostly looks like this:
чЦОMm🙌"f
😩✌👐jБУ🔥р'!🏻i{⬇dё6😂👆nU52г
👉И🏻T➕И7😁y👐рг😬л🏻Хж<tCП⃣“Уи🔻п😕н—В👆И2⏩💪RТ😬Ют🙂Ё♂5jЩpКГлш-nu5Цf⏩
NёФЁУЁ"🔥n⬇4🙃r😬uy⏩😂ч⏩CALfЛФcЁё»йx😤М
I'm a beginner in this field and am stuck. I am following this tutorial (https://towardsdatascience.com/multi-label-multi-class-text-classification-with-bert-transformer-and-keras-c6355eccb63a) to build a multi-label classifier using Hugging Face Transformers.
Following is the code I'm using to train my model.
# Name of the BERT model to use
model_name = 'bert-base-uncased'
# Max length of tokens
max_length = 100
PATH = 'uncased_L-12_H-768_A-12/'
# Load transformers config and set output_hidden_states to False
config = BertConfig.from_pretrained(PATH)
config.output_hidden_states = False
# Load BERT tokenizer
tokenizer = BertTokenizerFast.from_pretrained(PATH, local_files_only=True, config = config)
# tokenizer = BertTokenizer.from_pretrained(PATH, local_files_only=True, config = config)
# Load the Transformers BERT model
transformer_model = TFBertModel.from_pretrained(PATH, config = config,from_pt=True)
#######################################
### ------- Build the model ------- ###
# Load the MainLayer
bert = transformer_model.layers[0]
# Build your model input
input_ids = Input(shape=(None,), name='input_ids', dtype='int32')
# attention_mask = Input(shape=(max_length,), name='attention_mask', dtype='int32')
# inputs = {'input_ids': input_ids, 'attention_mask': attention_mask}
inputs = {'input_ids': input_ids}
# Load the Transformers BERT model as a layer in a Keras model
bert_model = bert(inputs)[1]
dropout = Dropout(config.hidden_dropout_prob, name='pooled_output')
pooled_output = dropout(bert_model, training=False)
# Then build your model output
issue = Dense(units=len(data.U_label.value_counts()), kernel_initializer=TruncatedNormal(stddev=config.initializer_range), name='issue')(pooled_output)
outputs = {'issue': issue}
# And combine it all in a model object
model = Model(inputs=inputs, outputs=outputs, name='BERT_MultiLabel_MultiClass')
# Take a look at the model
model.summary()
#######################################
### ------- Train the model ------- ###
# Set an optimizer
optimizer = Adam(
    learning_rate=5e-05,
    epsilon=1e-08,
    decay=0.01,
    clipnorm=1.0)
# Set loss and metrics
loss = {'issue': CategoricalCrossentropy(from_logits = True)}
# loss = {'issue': CategoricalCrossentropy()}
metric = {'issue': CategoricalAccuracy('accuracy')}
# Compile the model
model.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=metric)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(data['U_H_Label'])
# Ready output data for the model
y_issue = to_categorical(le.transform(data['U_H_Label']))
# Tokenize the input (takes some time)
x = tokenizer(
    text=data['Input_Data'].to_list(),
    add_special_tokens=True,
    max_length=max_length,
    truncation=True,
    padding=True,
    return_tensors='tf',
    return_token_type_ids=False,
    return_attention_mask=True,
    verbose=True)
# Fit the model
history = model.fit(
    # x={'input_ids': x['input_ids'], 'attention_mask': x['attention_mask']},
    x={'input_ids': x['input_ids']},
    y={'issue': y_issue},
    validation_split=0.2,
    batch_size=64,
    epochs=10)
When I use the model.predict() function, I think I get logit scores for each class, and would like to convert them to probability scores ranging from 0 to 1.
I have read in multiple blogs that a softmax function is what I have to use, but I am not able to work out where and how. If anyone could tell me what line of code is required, I'd be grateful!
Once you get the logit scores from model.predict(), you can do the following:
from torch.nn import functional as F
import torch
# convert logit score to torch array
torch_logits = torch.from_numpy(logit_score)
# get probabilities using softmax from logit score and convert it to numpy array
probabilities_scores = F.softmax(torch_logits, dim = -1).numpy()[0]
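Since the model in this question is a TensorFlow/Keras one, the same conversion also works without PyTorch; here is a minimal equivalent sketch (logit_score again being the array returned by model.predict()):

import tensorflow as tf

# get probabilities (summing to 1 over the classes) from the logits
probability_scores = tf.nn.softmax(logit_score, axis=-1).numpy()[0]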
In the official Keras seq2seq example (included at the bottom), they train the model with the fit function, but they don't seem to use that model anywhere in the decoding process that tests it on new data.
I am trying to train a seq2seq model just like this one, and I have successfully trained it on Google Colab, downloaded the .h5 file, and loaded the model with the load_model function, but it doesn't seem to work.
Why does the script need to train the model if it doesn't use the model variable anywhere after training? Extending from this, what is the process for using the script on new data? I'm currently using the same script but leaving out the compile and fit calls and replacing them with load_model.
from __future__ import print_function
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'fra-eng/fra.txt'
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text = line.split('\t')
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as "end sequence" character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)
input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
# Save model
model.save('s2s.h5')
# Next: inference mode (sampling).
# Here's the drill:
# 1) encode input and retrieve initial decoder state
# 2) run one step of decoder with this initial state
# and a "start of sequence" token as target.
# Output will be the next target token
# 3) Repeat with the current target token and current states
# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)
    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.
    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)
        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char
        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '\n' or
                len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True
        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.
        # Update states
        states_value = [h, c]
    return decoded_sentence
for seq_index in range(100):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print('Input sentence:', input_texts[seq_index])
    print('Decoded sentence:', decoded_sentence)
Of course it is using that model at inference time, just not all of it (i.e. not the whole model object). Instead, some of its layers are reused in the inference models:
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_outputs = decoder_dense(decoder_outputs)
Both decoder_lstm and decoder_dense belong to the training model and their weights have been trained. Now we can reuse these trained layers in another model (i.e. the inference model) to perform inference.
Update: If you have already trained and saved the model and would like to perform inference, you can first load the trained model and then use model.layers to access its layers and create the inference model. Actually, there is an accompanying script in the Keras examples repository that does exactly this.
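For reference, here is a minimal sketch of that approach, assuming the layer order produced by the training script above (model.layers[2] is the encoder LSTM, model.layers[3] the decoder LSTM, model.layers[4] the Dense layer; verify the indices with model.summary() on your own saved model, and latent_dim is the same 256 used for training):

from keras.models import load_model, Model
from keras.layers import Input

model = load_model('s2s.h5')

# Rebuild the encoder model from the trained layers.
encoder_inputs = model.input[0]
encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output
encoder_model = Model(encoder_inputs, [state_h_enc, state_c_enc])

# Rebuild the decoder model from the trained layers.
decoder_inputs = model.input[1]
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_lstm = model.layers[3]
decoder_outputs, state_h_dec, state_c_dec = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_dense = model.layers[4]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs, state_h_dec, state_c_dec])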
Model:
sequence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32')
words = embedding_layer(sequence_input)
h_words = Bidirectional(GRU(200, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(words)
sentence = Attention()(h_words)  # with return_attention=True
#sentence = Dropout(0.2)(sentence)
sent_encoder = Model(sequence_input, sentence[0])
print(sent_encoder.summary())
document_input = Input(shape=(None, MAX_SENT_LENGTH), dtype='int32')
document_enc = TimeDistributed(sent_encoder)(document_input)
h_sentences = Bidirectional(GRU(100, return_sequences=True))(document_enc)
preds = Dense(7, activation='softmax')(h_sentences)
model = Model(document_input, preds)
Attention layer used:
https://gist.github.com/cbaziotis/6428df359af27d58078ca5ed9792bd6d
with return_attention=True
How can I visualise the attention weights for a new input once the model is trained?
What I am trying:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                  [model.layers[1].layer.layers[3].output])
and passing a new input, but it gives me an error.
Possible reasons:
model.layers only gives me the top-level layers. I want to get the weights from the TimeDistributed part.
You can use the following to display all the layers in your model:
print(model.layers)
Once you know the index of your TimeDistributed layer, say 3, use the following to get its config and weights:
g = model.layers[3].get_config()
h = model.layers[3].get_weights()
print(g)
print(h)
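If what you want is the attention weights themselves for a new input, a simpler route than K.function is to build a small functional model that outputs the attention tensor directly. A minimal sketch, assuming your Attention layer with return_attention=True returns [weighted_output, attention_weights] so that sentence[1] holds the per-word weights (new_docs is an illustrative name for your new input):

# Sub-model mapping a single sentence to its attention weights;
# it shares the trained weights with sent_encoder.
att_model = Model(sequence_input, sentence[1])

# new_docs: array of shape (num_docs, num_sents, MAX_SENT_LENGTH).
# Flatten documents into sentences, score them, then reshape back.
num_docs, num_sents, _ = new_docs.shape
sent_inputs = new_docs.reshape(-1, MAX_SENT_LENGTH)
att_weights = att_model.predict(sent_inputs)
att_weights = att_weights.reshape(num_docs, num_sents, MAX_SENT_LENGTH)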