TensorFlow: Different results with the same random seed

I'm running a reinforcement learning program in a gym environment (BipedalWalker-v2) implemented in TensorFlow. I've set the random seeds of the environment, TensorFlow and NumPy manually as follows:
os.environ['PYTHONHASHSEED']=str(42)
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
env = gym.make('BipedalWalker-v2')
env.seed(0)
config = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
# run the graph with sess
However, I get different results every time I run my program (without changing any code). Why are the results not consistent and what should I do if I want to obtain the same result?
Update:
The only places I can think of that may introduce randomness (other than the neural networks) are:
I use tf.truncated_normal to generate the random noise epsilon used to implement a noisy layer
I use np.random.uniform to randomly select samples from the replay buffer
I also noticed that the scores I get are fairly consistent for the first 10 episodes, but then begin to differ. Other quantities, such as the losses, show a similar trend but are not numerically identical.
Update 2
I've also set "PYTHONHASHSEED" and used single-threaded CPU as @jaypops96 described, but still cannot reproduce the results. The code block above has been updated accordingly.

I suggest checking whether your TensorFlow graph contains nondeterministic operations; reduce_sum before TensorFlow 1.2, for example, was one such operation. These operations are nondeterministic because floating-point addition and multiplication are not associative (the order in which floating-point numbers are added or multiplied affects the result) and because such operations don't guarantee that their inputs are added or multiplied in the same order every time. See also this question.
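A quick way to see the order-dependence of floating-point addition in plain Python (nothing TensorFlow-specific):
print(0.1 + 0.2 + 0.3)                     # 0.6000000000000001
print(0.3 + 0.2 + 0.1)                     # 0.6
print(0.1 + 0.2 + 0.3 == 0.3 + 0.2 + 0.1)  # False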
EDIT (Sep. 20, 2020): The GitHub repository framework-determinism has more information about sources of nondeterminism in machine learning frameworks, particularly TensorFlow.

It seems that TensorFlow neural networks introduce randomness during training that isn't controlled by a NumPy random seed. The randomness appears to come, at the very least, from Python hash operations and from parallelized operations executing in an uncontrolled order.
I had success getting 100% reproducibility with a Keras/TensorFlow NN by following the setup steps in this response:
How to get reproducible results in keras
Specifically, I used the formulation proposed by @Poete Maudit in that link.
The key was to set the random seed values up front for NumPy, Python and TensorFlow, and then to make TensorFlow run on a single-threaded CPU in a specially configured session.
Here's the code I used, updated very slightly from the link I posted.
print('Running in 1-thread CPU mode for fully reproducible results training a CNN and generating numpy randomness. This mode may be slow...')
# Seed value
# Apparently you may use different seed values at each stage
seed_value= 1
# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
seed_value += 1
# 2. Set `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)
seed_value += 1
# 3. Set `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)
seed_value += 1
# 4. Set `tensorflow` pseudo-random generator at a fixed value
import tensorflow as tf
tf.set_random_seed(seed_value)
# 5. Configure a new global `tensorflow` session
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
tf.keras.backend.set_session(sess)
#rest of code...
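For reference, in TensorFlow 2.x (where ConfigProto and Session no longer exist) a rough equivalent of the same setup is sketched below; this is an assumption-based translation, not part of the original answer, and the thread settings must be applied before any TensorFlow ops run.
# Rough TF 2.x equivalent of the single-thread setup above (sketch only)
import os
import random
import numpy as np
import tensorflow as tf

seed_value = 1
os.environ['PYTHONHASHSEED'] = str(seed_value)
random.seed(seed_value)
np.random.seed(seed_value)
tf.random.set_seed(seed_value)

# Force single-threaded execution; call before TensorFlow does any work
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)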

Maybe you can try setting the number of parallelism threads to 1. I had the same problem: the loss became different in the seventh decimal place starting from the second episode. It was fixed when I set
tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)

Related

How to create reproducible RandomForestClassifier in Python using n_jobs=-1?

I've read at https://towardsdatascience.com/random-seeds-and-reproducibility-933da79446e3 that to create reproducible machine learning models in Python, you need to set the random seed and pin the package versions.
I would like to be able to save models after training, e.g. using pickle.dump(), load them up again and then get the same results.
At https://docs.python.org/3/library/random.html#notes-on-reproducibility it says:
"Sometimes it is useful to be able to reproduce the sequences given by a pseudo-random number generator. By re-using a seed value, the same sequence should be reproducible from run to run as long as multiple threads are not running."
I'm using a RandomForestClassifier with n_jobs=-1, so I'm wondering whether I need to do more or whether this is handled internally already.
For the random seed now I have:
os.environ['PYTHONHASHSEED'] = str(42)
random.seed(42)
np.random.seed(42)
And for the classifier I'm setting the random state:
rf = RandomForestClassifier(random_state=42)
According to the documentation, you must also set the random_state parameter in the RandomForestClassifier:
random_state: int, RandomState instance or None, default=None
Controls both the randomness of the bootstrapping of the samples used when
building trees (if bootstrap=True) and the sampling of the features to
consider when looking for the best split at each node (if max_features
< n_features). See Glossary for details.
For example:
from sklearn.ensemble import RandomForestClassifier
SEED = 42
clf = RandomForestClassifier(random_state = SEED)
CLARIFICATIONS:
In order for the experiment to be fully reproducible, all steps in the preparation of the dataset must be checked (e.g. train and test splits), even with a fixed seed. np.random.seed does not guarantee a fixed random state for sklearn; you need to set the random_state parameter of each sklearn function to ensure repeatability.
Setting random_state is also sufficient when using multithreading (n_jobs=-1). Make sure you use the latest version of sklearn if possible, to avoid possible bugs in earlier versions.
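A minimal sketch of what a fully reproducible pipeline looks like (the synthetic dataset here is purely illustrative): every sklearn call that accepts random_state gets the same fixed seed, and n_jobs=-1 does not break reproducibility.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42

# Illustrative synthetic data; replace with your own dataset
X, y = make_classification(n_samples=1000, random_state=SEED)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=SEED)

clf = RandomForestClassifier(n_jobs=-1, random_state=SEED)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # identical across runs with the same SEED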

How to get reproducible results with Keras/Tensorflow on different computers?

We are working on a deep learning project with a friend, and we would like to train the same model on our 2 different computers and get the same results. Is there a way we can do that?
Tensorflow version is 2.5.0
What we tried so far:
At the start of our code, in the first line of the main function, we set the Python Hash Seed to 0.
os.environ['PYTHONHASHSEED'] = str(0)
After that, in the training function we set the seeds appropriately.
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
On one computer, two runs with the same seed give us the same result; however, on the two different computers the results are different.
For example, with seed = 1, on my PC the loss is around 0.09, but on my friend's it is lower than 0.06.
Why is that, and how can we solve it?

Keras - Hyper Tuning the initial state of the model

I've written an LSTM model that predicts sequential data.
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout, Input, LSTM
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# get_deep is a helper defined elsewhere (not shown)
def get_model(config, num_features, output_size):
    opt = Adam(learning_rate=get_deep(config, 'hp.learning_rate'), beta_1=get_deep(config, 'hp.beta_1'))
    inputs = Input(shape=[None, num_features], dtype=tf.float32, ragged=True)
    layers = LSTM(get_deep(config, 'hp.lstm_neurons'), activation=get_deep(config, 'hp.lstm_activation'))(
        inputs.to_tensor(), mask=tf.sequence_mask(inputs.row_lengths()))
    layers = BatchNormalization()(layers)
    if 'dropout_rate' in config['hp']:
        layers = Dropout(get_deep(config, 'hp.dropout_rate'))(layers)
    for layer in get_deep(config, 'hp.dense_layers'):
        layers = Dense(layer['neurons'], activation=layer['activation'])(layers)
        layers = BatchNormalization()(layers)
        if 'dropout_rate' in layer:
            layers = Dropout(layer['dropout_rate'])(layers)
    layers = Dense(output_size, activation='sigmoid')(layers)
    model = Model(inputs, layers)
    model.compile(loss='mse', optimizer=opt, metrics=['mse'])
    model.summary()
    return model
I've tuned some of the layers' parameters using AWS SageMaker. While validating the model I ran it with a specific configuration several times. Most of the time the results were similar; however, one run was much better than the others, which led me to think that the initial state of the model is probably crucial for getting the best performance.
As suggested in this video, weight initialization can provide some performance boost.
I've googled around and found layer weight initializers, but I'm not sure what ranges I should tune.
Update:
As suggested in some of the comments / answers I'm using a fixed seed to "lock" the model results:
# Set `python` built-in pseudo-random generator at a fixed value
random.seed(seed_value)
# Set `numpy` pseudo-random generator at a fixed value
np.random.seed(seed_value)
# Set `tensorflow` pseudo-random generator at a fixed value
tf.random.set_seed(seed_value)
The results replicate for each new training run; however, different seeds can produce much better results than others. So how do I find/tune the best seed?
... which led me to think that the initial state of the model is probably crucial in order to get the best performance.
..... As suggested in this video, weight initialization can provide some performance boost. I've googled around and found layer weight initializers, but I'm not sure what ranges should I tune.
Firstly, in that video, apart from the state or weight initializer, all the other factors such as the learning rate, schedule, optimizer, batch size, loss function, model depth, etc. are things you should play with to find the best set (we will talk about the role of the seed later). Normally, we don't need to tune the default weight or state initializer, as the defaults are currently among the best; and, as usual, state initialization is a research problem in its own right.
Secondly, in Keras, the default weight initializer for Convolution, Dense and RNN-GRU/LSTM layers is glorot_uniform, also known as the Xavier uniform initializer, and the default bias initializer is zeros. If you follow the source code of LSTM (in your case), you will find them. According to the doc, glorot_uniform:
Draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(6 / (fan_in + fan_out)) (fan_in is the number of input units in the weight tensor and fan_out is the number of output units).
Now, you may already have noticed that this initializer is inherited from VarianceScaling; and, like GlorotUniform, others such as GlorotNormal, LecunNormal, LecunUniform, HeNormal and HeUniform also inherit from it. The supported parameters of VarianceScaling are listed here. For example, the following two are technically the same.
# In case you want to try various initializers,
# use VarianceScaling and pass the appropriate parameters,
# e.g. tf.keras.layers.LSTM(..., kernel_initializer=initializer),
# but it is recommended to stick with glorot_uniform (the default).
initializer = tf.keras.initializers.VarianceScaling(scale=1.,
                                                    mode='fan_avg', seed=101,
                                                    distribution='uniform')
print(initializer(shape=(2, 2)))

initializer = tf.keras.initializers.GlorotUniform(seed=101)
print(initializer(shape=(2, 2)))
tf.Tensor(
[[-1.0027379 1.0746485]
[-1.2234 -1.1489409]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[-1.0027379 1.0746485]
[-1.2234 -1.1489409]], shape=(2, 2), dtype=float32)
In short, you can play with tf.keras.initializers.VarianceScaling (at the bottom of the page). Additionally, you can make your own initializer by defining a callable function or by subclassing the Initializer class. For example:
def conv_kernel_initializer(shape, dtype=None):
    kernel_height, kernel_width, _, out_filters = shape
    fan_out = int(kernel_height * kernel_width * out_filters)
    return tf.random.normal(
        shape, mean=0.0, stddev=np.sqrt(2.0 / fan_out), dtype=dtype)

def dense_kernel_initializer(shape, dtype=None):
    init_range = 1.0 / np.sqrt(shape[1])
    return tf.random.uniform(shape, -init_range, init_range, dtype=dtype)
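Such callables can then be passed to a layer's kernel_initializer argument; a brief usage sketch (the layer sizes are illustrative only):
# Illustrative usage of the custom initializers defined above
conv = tf.keras.layers.Conv2D(32, kernel_size=3,
                              kernel_initializer=conv_kernel_initializer)
dense = tf.keras.layers.Dense(10,
                              kernel_initializer=dense_kernel_initializer)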
Here is one good article about initializing the weights that you may enjoy reading. But again, it is better to go with the defaults.
Thirdly, regarding trying different seed values, different sets of hyper-parameters, etc., I will leave one of my old answers here; the first diagram in it will probably come in handy for your experiments. One approach I follow is to keep the seed the same (say, for the first 5 experiments), change one other factor at a time, and log the results. After 5 iterations we will hopefully have a good set and can proceed from there.
Update
Find/tune the seed. Before searching for a method to find the best seed, one must understand that the seed is not a hyper-parameter to be tuned along with the other hyper-parameters such as the learning rate, scheduler, optimizer, etc.
Here is one scenario: let's say you split the data randomly into two parts with seed 42, a train set (70%) and a test set (30%); after training on the train set, you evaluate your model on the test set and get a score of 80. Then you change the seed to 101, do the same again, and now get a score of 50. This doesn't mean that picking seed 42 is better; it simply means your model is unstable and most likely won't do well on unseen data. This is actually a well-known issue when data sets are split randomly into training and testing. Why does it happen? Because when you split the data randomly, there may be a mismatch in the class distribution. Please check the following two very related discussions on this:
Is random seed a hyper-parameter to tune in training deep neural network?
How to choose the random seed?
I don't think there is a one-size-fits-all solution to this issue. The initial weights heavily depend on the kind of problem at hand and the data we are using to solve that problem. All we can do is point you towards a good resource where you can see which of the approaches mentioned fits your problem.
The following article is a good resource that not only provides a detailed understanding of how and why to initialize weights, but also points towards peer-reviewed research that can help build an academic understanding.
Maybe you should look into an exponentially decaying learning rate. Let me explain: your first epoch sometimes has a loss of 3000 or 4000, and sometimes just 500. If you run a model often enough, you will probably recognize a "real barrier" where you no longer say "that's because of the initial state". You want to get there fast, but without keeping the bad side effects of a high learning rate (e.g. 1e-3); eventually you want something more like 1e-5. That's where exponential decay comes in: create an instance, myLr = tf.train.exponential_decay(...), and pass it to your optimizer instead of the numerical learning rate, for example Adam(myLr).
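A minimal sketch of the same idea with the TF 2.x Keras API (the question's code uses tf.keras, and tf.train.exponential_decay is the TF 1.x variant); the schedule values are purely illustrative:
import tensorflow as tf

# Exponentially decaying learning rate (illustrative values)
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,  # start fast
    decay_steps=10000,           # decay every 10k steps
    decay_rate=0.96)             # multiply the rate by 0.96 each time

opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
# model.compile(loss='mse', optimizer=opt)  # assuming `model` is already built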
Indeed, the initial state of the model is crucial for getting the best performance. Deep learning works by optimizing a non-convex loss function in order to find a good local minimum.
The initial weights define the starting location of the optimization, as illustrated in the picture below: the starting point is defined by the initial weights, and training the model makes it reach a local minimum. As you can see, there is a starting weight configuration that allows reaching the global minimum.
It is sometimes possible to get a better weight initialization with transfer learning, which reuses the weights of an already trained model for a downstream task (for example VGG-16 in image classification, or BERT in NLP).
In your case, you should not try to fine-tune the weight initialization, as it is meant to be random. Changing the architecture of your neural network, or its hyper-parameters, will more likely lead to a performance improvement.
Short answer: you can neither efficiently nor effectively tune the seed for a pseudo-random number generator. It is not only infeasible due to the extremely large search space, but also impractical for many other reasons, including the fact that pseudo-random number generator implementations change from time to time so you would need to start over every time that happened.
If, for some reason, you are hell-bent on discovering this for yourself, I recommend using NumPy's default_rng object to be the single source of all pseudo-randomness in your algorithm. Then, based on a single seed, you can produce other seeds deterministically for use with, say, tf.random.set_seed.
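A minimal sketch of that idea (the child-seed derivation below is just one possible convention, not a prescribed API):
import numpy as np
import tensorflow as tf

master_seed = 42
rng = np.random.default_rng(master_seed)

# Derive child seeds deterministically from the single master generator
tf_seed = int(rng.integers(0, 2**31 - 1))
tf.random.set_seed(tf_seed)

# Any further randomness in your own code should also come from `rng`,
# e.g. rng.uniform(), rng.integers(), rng.shuffle(...)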

Fluctuations in neural network inference time in PyTorch

I have a neural network and I would like to test its inference time by running it on a CPU. It's important that the time is measured as accurately as possible.
I use the following code to get the inference time of a network:
with torch.no_grad():
    for image, _ in testloader:
        start = time.time()
        net(image)
        total_time = time.time() - start
The problem is that even if I pass the same image through the network (although in this case "image" is really a batch of images) and do that many times over, the inference times fluctuate, with the difference between the minimum and maximum times reaching 30 ms or even more.
Also, before I ran the tests, I made sure that the OS would interfere with the process as little as possible: I closed all applications.
I want to understand exactly where these fluctuations come from. My understanding is that if the same image is passed through the network, it should take the same time to classify it, no matter how many tests I run (at least theoretically speaking).
So what causes these fluctuations? Is there a way to eliminate them?
If you run a standard torch model, torch creates an execution graph and performs some caching; this makes the first run longer than the others. I have also noticed, in several cases, small differences in garbage-collector behaviour during iterations.
You can (partially) avoid these problems by converting the model to a TorchScript model, as sketched below.
You should also check your dataloader and fix any randomness.
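A minimal sketch of that combination: trace the model to TorchScript, warm it up, then time repeated runs on a fixed batch. `net` and `testloader` are taken from the question; the run counts are illustrative.
import time
import torch

net.eval()
example_batch, _ = next(iter(testloader))
scripted = torch.jit.trace(net, example_batch)  # TorchScript via tracing

with torch.no_grad():
    for _ in range(5):                 # warm-up runs absorb one-time costs
        scripted(example_batch)

    timings = []
    for _ in range(100):
        start = time.perf_counter()    # perf_counter suits short-interval timing
        scripted(example_batch)
        timings.append(time.perf_counter() - start)

print(f"min={min(timings):.4f}s  max={max(timings):.4f}s")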

Why is my model giving different result each time I train it?

My question is: why, when I train the same algorithm twice, does it give different results each time?
Is this normal, or might there be a problem in the data or code?
The algorithm is deep deterministic policy gradient (DDPG).
It's absolutely normal. There is no problem with either data or code.
The algorithm may be initialized to a random state, such as the initial weights in an artificial neural network.
Try setting numpy seed for result reproducibility as below:
import numpy as np
np.random.seed(42)
Learn more about this from here.
When you initialize the weights of your model, they are usually initialized randomly by whatever library you use, most likely via something like np.random.rand(), and therefore yield different results every time.
If you do not want randomized weights, use np.random.seed(10) to always get the same results. If you are using another library, I'm sure there are equivalent commands.
Edit: I saw you used TensorFlow; in that case:
tf.random.set_random_seed(10)
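Note that the exact call depends on your TensorFlow version; a quick sketch:
import tensorflow as tf

# TensorFlow 1.x (graph mode):
# tf.compat.v1.set_random_seed(10)

# TensorFlow 2.x:
tf.random.set_seed(10)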
