Neural network for iris data set - python

Background: I'm writing a three-layer neural network in Python, trained with mini-batch stochastic gradient descent, to classify the three species of iris plants from the famous iris data set. The input layer has four neurons, one for each feature in the data. The hidden layer has 3 neurons (though the code allows the number of hidden neurons to be varied) and the output layer has three neurons (one for each species). All neurons use sigmoid activation functions.
Problem: The loss (mean-squared error) generally decreases over time; however, the accuracy (usually below 55.55%, or even 33.33%) is stagnant. I've tried experimenting with different numbers of epochs and different learning rates, but nothing worked. Interestingly, more often than not, the network's outputs remain fixed no matter what the input values are. I'm fairly certain of my math, since the loss does decrease as the number of epochs increases.
To replicate the problem: just run the Python code and observe the LEARNING_RESULTS.txt file. (Make sure the iris.txt file from the repo is in the same directory.)
Question: How can I improve performance for this neural network?
Link to GitHub repo: https://github.com/kwonkyo/neural-networks
Thanks!
UPDATE: Problem solved. I was adding a constant value (the scalar sum over the mini-batch gradient matrices) to the weight and bias matrices instead of the element-wise sum of the mini-batch gradient matrices themselves. Updated code has been pushed to GitHub.
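For anyone hitting the same issue, here is a minimal sketch of what the corrected mini-batch update looks like (the variable names are illustrative, not taken from the repo):

import numpy as np

def apply_mini_batch(W, b, grad_W_list, grad_b_list, lr):
    # grad_W_list / grad_b_list hold one gradient matrix per sample in the
    # mini-batch; the update uses their element-wise (matrix) sum scaled by
    # the learning rate, NOT a single scalar sum of all their entries.
    n = len(grad_W_list)
    W = W - (lr / n) * np.sum(grad_W_list, axis=0)
    b = b - (lr / n) * np.sum(grad_b_list, axis=0)
    return W, b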

Related

Neural Network doesn't match the expected output

I am trying to build a neural network from scratch, using only NumPy. I have the following code and functions. However, the output after training does not match the expected output that I have (using XOR as an example). I think one of my functions is not correct, but I cannot figure out the mistake. The output I get is, for example: [[0.73105858], [0.53336314], [0.79343002], [0.5786911]], which is not close to the expected output [0,0,0,1].
I don't see any issues with your code, but here are some things you should keep in mind:
Your neural network is trained for 2 iterations, with a learning rate of 0.01. This means that your network is only updated 2 times, with a small rate of improvement, resulting in an undertrained neural network. Also, you're always using a tensor of size 4*4 as input, meaning that the neural network is only updated for the average of all samples, hence the result that just looks like an average.
For improvement, my suggestion would be to increase the number of iterations and also the number of samples per iteration, making sure that each iteration performs more than one update. Still, I believe you won't get 100% accurate results, since you are only using one linear layer for XOR, which cannot be solved by a single linear system. You could consider adding another layer for better results.
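For illustration only (this is not the asker's code), a small NumPy sketch of the suggested fix: a hidden layer on top of the output layer and far more than 2 updates, using the standard XOR targets:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # hidden layer (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # output layer
lr = 0.5

for _ in range(10000):                             # many more than 2 iterations
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)            # MSE gradient * sigmoid derivative
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))                                # should approach [0, 1, 1, 0]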

How do neural network models learn different weights for each of the neuron in a single layer?

I have had an overview of how neural networks work and have come up with some interconnected questions to which I am not able to find an answer.
Considering a one-hidden-layer feedforward neural network: if the function for each of the hidden-layer neurons is the same,
a1 = relu(w1*x1 + w2*x2), a2 = relu(w3*x1 + w4*x2), ...
How do we make the model learn different values of weights?
I do understand the point of manually established connections between neurons. As shown in the picture "Manually established connections between neurons", that way we define the possible functions of functions (e.g., house size and number of bedrooms taken together might represent the family size the house could accommodate). But the fully connected network doesn't make sense to me.
I get the point that a fully connected neural network should somehow automatically determine which functions of functions make sense, but how does it do that?
Since I am unable to answer this question, I also don't understand why increasing the number of neurons should increase the accuracy of the model's predictions.
How do we make the model learn different values of weights?
By initializing the parameters randomly before training starts. In a fully connected neural network we would otherwise get the same update step on every parameter in a layer - that is where your confusion is coming from. Initialization, either random or more sophisticated (e.g. Glorot), breaks this symmetry.
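To make the symmetry argument concrete, here is a small, purely illustrative NumPy example: when all hidden weights start with the same value, every hidden unit receives the same gradient and the units never differentiate, whereas random initialization breaks the tie:

import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(8, 2))                  # toy batch with 2 features
t = rng.normal(size=(8, 1))                  # toy regression targets
v = np.ones((3, 1))                          # fixed output weights, for simplicity

def one_gradient_step(W, lr=0.1):
    h = np.maximum(0.0, x @ W)               # relu hidden layer with 3 units
    err = h @ v - t
    grad_W = x.T @ ((err @ v.T) * (h > 0))   # gradient w.r.t. the hidden weights
    return W - lr * grad_W

W_constant = one_gradient_step(np.full((2, 3), 0.5))    # identical initial weights
W_random = one_gradient_step(rng.normal(size=(2, 3)))   # random initial weights

print(W_constant)  # all columns identical: the units stay interchangeable
print(W_random)    # columns differ: the units can learn different features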
Why should increasing the number of neurons increase the accuracy of the model prediction?
This is only partially true: increasing the number of neurons should improve your training accuracy (it is a different game for your validation and test performance). By adding units your model is able to store additional information or incorporate outliers, and hence improve the accuracy of its predictions. Think of a 2D problem (predicting house prices per sqm over the sqm of a property). With two parameters you can fit a line, with three a curve, and so on; the more parameters you have, the more complex your curve can get and the closer it can fit through each of your training points.
Great next step for a deep dive - Karpathy's lecture on Computer Vision at Stanford.

Python Keras LSTM learning converges too fast on high loss

This is more of a deep learning conceptual problem, and if this is not the right platform I'll take it elsewhere.
I'm trying to use a Keras LSTM sequential model to learn sequences of text and map them to a numeric value (a regression problem).
The thing is, the learning always converges too fast on a high loss (both training and testing). I've tried all kinds of hyperparameters, and I have a feeling it's a local-minimum issue that causes the model's high bias.
My questions are basically :
How to initialize weights and bias given this problem?
Which optimizer to use?
How deep should I extend the network? (I'm afraid that if I use a very deep network, the training time will be unbearable and the model variance will grow.)
Should I add more training data?
Input and output are normalized with minmax.
I am using SGD with momentum, currently 3 LSTM layers (126,256,128) and 2 dense layers (200 and 1 output neuron)
I have printed the weights after a few epochs and noticed that many weights are zero and the rest basically have the value of 1 (or very close to it).
Here are some plots from TensorBoard:
Faster convergence with a very high loss could possibly mean you are facing an exploding gradients problem. Try to use a much lower learning rate like 1e-5 or 1e-6. You can also try techniques like gradient clipping to limit your gradients in case of high learning rates.
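A rough sketch of both suggestions in Keras (this assumes a tf.keras model already built as model; the exact values are illustrative, and in older Keras versions the argument is lr rather than learning_rate):

from tensorflow import keras

# Much lower learning rate plus gradient clipping by norm.
optimizer = keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9, clipnorm=1.0)
model.compile(loss='mean_squared_error', optimizer=optimizer)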
Answer 1
Another reason could be the initialization of the weights; try the three methods below:
The method described in this paper (He initialization): https://arxiv.org/abs/1502.01852
Xavier initialization
Random initialization
In many cases the first initialization method works best.
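In Keras these map roughly to the he_normal (the He et al. paper above), glorot_uniform (Xavier) and RandomNormal initializers; a hedged sketch of setting them per layer:

from tensorflow import keras

# Method 1: He initialization, usually a good default with ReLU-style units.
dense_he = keras.layers.Dense(128, kernel_initializer='he_normal')

# Method 2: Xavier / Glorot initialization (the Keras default for Dense).
dense_xavier = keras.layers.Dense(128, kernel_initializer='glorot_uniform')

# Method 3: plain random normal initialization.
dense_random = keras.layers.Dense(
    128, kernel_initializer=keras.initializers.RandomNormal(stddev=0.05))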
Answer 2
You can try different optimizers, such as:
Momentum optimizer
SGD or Gradient descent
Adam optimizer
The choice of your optimizer should be based on the choice of your loss function. For example: for a logistic regression problem with MSE as a loss function, gradient based optimizers will not converge.
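Swapping optimizers in Keras is a one-line change in compile(); a small sketch, again assuming an existing model:

from tensorflow import keras

model.compile(loss='mean_squared_error',
              optimizer=keras.optimizers.Adam(learning_rate=1e-4))
# or: keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)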
Answer 3
How deep or wide your network should be is again fully dependent on which type of network you are using and what the problem is.
As you said, you are using a sequential model with LSTMs to learn sequences of text. No doubt your choice of model is good for this problem; you can also try 4-5 LSTM layers.
Answer 4
If your gradients are going to zero (vanishing gradients) or to infinity (exploding gradients), or training simply converges too early, try gradient clipping with a proper learning rate and the first weight-initialization technique.
I am sure this will definitely solve your problem.
Consider reducing your batch_size.
With a large batch_size, it could be that at some point your gradient no longer finds any variation in your data's stochasticity, and for that reason it converges earlier.

Tensorflow neural network loss value NaN

I'm trying to build a simple multilayer perceptron model on a large data set, but I'm getting the loss value as NaN. The weird thing is: after the first training step, the loss value is not NaN and is about 46 (which is oddly low; when I run a logistic regression model, the first loss value is about 3600). But then, right after that, the loss value is constantly NaN. I used tf.Print to try and debug it as well.
The goal of the model is to predict ~4500 different classes - so it's a classification problem. Using tf.Print, I see that after the first training step (or feed-forward through the MLP), the predictions coming out of the last fully connected layer seem right (varying numbers between 1 and 4500). But after that, the outputs from the last fully connected layer go to either all 0's or some other constant number (0 0 0 0 0).
For some information about my model:
3 layer model. all fully connected layers.
batch size of 1000
learning rate of .001 (I also tried .1 and .01 but nothing changed)
using CrossEntropyLoss (I did add an epsilon value to prevent log(0))
using AdamOptimizer
learning rate decay is .95
The exact code for the model is below: (I'm using the TF-Slim library)
input_layer = slim.fully_connected(model_input, 5000, activation_fn=tf.nn.relu)
hidden_layer = slim.fully_connected(input_layer, 5000, activation_fn=tf.nn.relu)
output = slim.fully_connected(hidden_layer, vocab_size, activation_fn=tf.nn.relu)
output = tf.Print(output, [tf.argmax(output, 1)], 'out = ', summarize = 20, first_n = 10)
return {"predictions": output}
Any help would be greatly appreciated! Thank you so much!
Two (possibly more) reasons why it doesn't work:
You skipped or inappropriately applied feature scaling of your inputs and outputs. Consequently, the data may be difficult for Tensorflow to handle.
Using ReLU, which is not differentiable at zero, may raise issues. Try using other activation functions, such as tanh or sigmoid.
For some reason, your training process has diverged, and you may have infinite values in your weights, which gives NaN losses. The reasons can be many; try changing your training parameters (use smaller batches to test).
Also, using a ReLU on the last output of a classifier is not the usual method; try using a sigmoid.
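For what it's worth, the more common pattern for a multi-class classifier like this is to leave the last layer linear and let a softmax cross-entropy loss handle the rest. A hedged TF 1.x / TF-Slim style sketch (model_input and vocab_size come from the snippet above; labels is a placeholder for the one-hot targets):

import tensorflow as tf
import tensorflow.contrib.slim as slim

hidden = slim.fully_connected(model_input, 5000, activation_fn=tf.nn.relu)
logits = slim.fully_connected(hidden, vocab_size, activation_fn=None)  # no relu on the output
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)  # numerically stable, no manual epsilon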
From my understanding, ReLU doesn't put a cap on the upper bound of the activations, so the network is more likely to diverge depending on the implementation.
Try switching all the activation functions to tanh or sigmoid. ReLU is generally used for convolutions in CNNs.
It's also difficult to determine whether you're diverging because of the cross-entropy, as we don't know how you affected it with your epsilon value. Try just using the residual; it's much simpler but still effective.
Also, a 5000-5000-4500 neural network is huge. It's unlikely you actually need a network that large.

Replicating results with different artificial neural network frameworks (ffnet, tensorflow)

I'm trying to model a technical process (a number of nonlinear equations) with artificial neural networks. The function has a number of inputs and a number of outputs (e.g. 50 inputs, 150 outputs - all floats).
I have tried the python library ffnet (wrapper for a fortran library) with great success. The errors for a certain dataset are well below 0.2%.
It uses a fully connected graph with these additional parameters:
Basic assumptions and limitations:
Network has feed-forward architecture.
Input units have identity activation function, all other units have sigmoid activation function.
Provided data are automatically normalized, both input and output, with a linear mapping to the range (0.15, 0.85). Each input and output is treated separately (i.e. linear map is unique for each input and output).
Function minimized during training is a sum of squared errors of each output for each training pattern.
I am using one input layer, one hidden layer (size: 2/3 of input vector + size of output vector) and an output layer. I'm using the scipy conjugate gradient optimizer.
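For context, the ffnet side of this setup looks roughly like the following (a sketch from memory of the ffnet API, with placeholder data and sizes, not the actual script):

from ffnet import ffnet, mlgraph

n_in, n_out = 50, 150
n_hidden = (2 * n_in) // 3 + n_out          # 2/3 of the input size plus the output size
conec = mlgraph((n_in, n_hidden, n_out))    # fully connected layered architecture
net = ffnet(conec)
net.train_cg(inputs, targets)               # scipy conjugate gradient training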
The downside of ffnet is the long training time and the lack of GPU support. Therefore I want to switch to a different framework and have chosen Keras with TensorFlow as the backend.
I have tried to model the previous configuration:
model = Sequential()
model.add(Dense(n_hidden, input_dim=n_in))
model.add(BatchNormalization())
model.add(Dense(n_hidden))
model.add(Activation('sigmoid'))
model.add(Dense(n_out))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='mean_squared_error',
              optimizer='Adamax',
              metrics=['accuracy'])
However, the results are far worse: the error is up to 0.5% after a few thousand (!) epochs of training, whereas the ffnet training was automatically stopped at 292 epochs. Furthermore, the differences between the network response and the validation targets are not centered around 0, but are mostly negative.
I have tried all optimizers and different loss functions. I have also skipped the BatchNormalization and normalized the data manually in the same way that ffnet does it. Nothing helps.
Does anyone have a suggestion to obtain better results with keras?
I understand you are trying to re-train the same architecture from scratch with a different library. The first fundamental issue to keep in mind here is that neural nets are not necessarily reproducible when weights are initialized randomly.
For example, here is the default constructor parameter for Dense in Keras:
init='glorot_uniform'
But even before trying to evaluate the convergence of the Keras optimization, I would recommend porting the weights for which you got good results from ffnet into your Keras model. You can do so either with the weights= kwarg of each Dense(...) layer, or globally at the end with model.set_weights(...).
Using the same weights should yield exactly the same result between the two libraries, unless you run into some floating-point rounding issues. I believe that as long as the ported weights do not give consistent results, working on the optimization is unlikely to help.
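A rough sketch of the global option, assuming you can extract each ffnet layer's weights into (kernel, bias) NumPy arrays with kernels shaped (n_inputs, n_units); the names below are hypothetical and the ordering may need adjusting to ffnet's internal layout:

import numpy as np
from keras.layers import Dense

# Hypothetical (kernel, bias) arrays extracted from the trained ffnet model,
# one pair per Dense layer, in the same order as model.layers.
ffnet_weights = [(W_h1, b_h1), (W_h2, b_h2), (W_out, b_out)]

dense_layers = [l for l in model.layers if isinstance(l, Dense)]
for layer, (kernel, bias) in zip(dense_layers, ffnet_weights):
    layer.set_weights([np.asarray(kernel), np.asarray(bias)])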
