I am currently working on a project related to machine learning.
I extracted some features from the object.
I trained and tested those features with NB, SVM, and other classification algorithms and got results of about 70 to 80% accuracy.
When I trained the same features with a neural network using nolearn.dbn and then tested it, only about 25% were correctly classified. I had 2 hidden layers.
I still don't understand what is wrong with neural networks.
I hope to get some help.
Thanks
Try increasing the number of hidden units and the learning rate. The power of neural networks comes from the hidden layers. Depending on the size of your dataset, the number of hidden units can go up to a few thousand. Also, please elaborate on the kind and number of features you're using. If the feature set is small, you're better off using SVMs and random forests instead of neural networks.
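As a rough illustration of those two knobs, here is a minimal sketch using scikit-learn's MLPClassifier as a stand-in for nolearn.dbn (whose exact argument names I won't assume); the synthetic data is only a placeholder for the asker's extracted features:

```python
# Illustrative sketch only: wider hidden layers and an explicit learning rate.
# scikit-learn's MLPClassifier is used here as a stand-in for nolearn.dbn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder for the extracted features and labels.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

clf = MLPClassifier(
    hidden_layer_sizes=(300, 300),  # two hidden layers with more units
    learning_rate_init=0.01,        # try a few values, e.g. 0.001, 0.01, 0.1
    max_iter=500,
    random_state=0,
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```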
My graduation project is to use transfer learning on a CNN model that can diagnose Covid-19 from chest X-ray images. After spending days fine-tuning the hyperparameters, such as the number of fully connected layers, the number of nodes in the layers, the learning rate, and the dropout rate, using the Keras Tuner library with Bayesian optimization, I got some very good results: a test accuracy of 98% for multi-class classification and 99% for binary classification. However, I froze all the layers in the original base model; I only fine-tuned the last fully connected layers after exhaustive hyperparameter optimization. Most articles and papers out there say that they fine-tune the fully connected layers as well as some of the convolutional layers. Am I doing something wrong? I am afraid that this is too good to be true.
My data set is not that big, only 7000 images taken from the Kaggle Covid-19 competition.
I used image enhancement techniques such as N-CLAHE on the images before training and classification, which improved the accuracy significantly compared to not enhancing the images.
I did the same for multiple state-of-the-art models, such as VGG-16 and ResNet50, and they all gave me superb results.
If you're asking whether you did something wrong by only fine-tuning the last fully connected layers, then no, you did not.
You can choose to fine-tune any layers you like, but the most important ones are the final layers of the model, which is what you did, so you're good to go.
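For reference, the usual Keras pattern behind what you describe looks roughly like the sketch below; VGG16, the 224x224 input size, the head sizes, and the three output classes are illustrative assumptions, not your exact setup:

```python
# Rough sketch of "freeze the convolutional base, train only a new FC head" in Keras.
# VGG16, the 224x224 input size and the layer sizes are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze all convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. normal / pneumonia / covid
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# To additionally fine-tune some convolutional layers (what many papers do),
# unfreeze the last few layers and recompile with a much smaller learning rate:
base.trainable = True
for layer in base.layers[:-4]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```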
I am new to Python and I need to solve a classification problem in finance with a neural net. There are many factor variables in my data. Could you help me find out whether there are any neural networks with at least two hidden layers that can accept factor variables?
I'm building a Sequential NN model in Keras for binary classification. The training data has about 600,000 rows and 2,000 features, so every epoch and every layer is very time consuming. I believe many of the features are not relevant to the model and can be dropped altogether to make the model thinner, so it would be faster to work with.
I ran a simple model with one hidden layer of 200 neurons. How can I tell which of the features (which are actually the nodes in the input layer) are meaningless, so I could drop them from the dataset and re-run the model without them?
There is a very big topic in machine learning called feature selection. That said, neural networks are considered to choose the best features for the problem automatically, to an extent, by using their weights to give some features more or less influence. Neural networks also need a lot of experience to be tuned correctly. I would definitely suggest increasing the number of layers in the network, because you have a lot of data and features, and using l1 regularisation in order to get sparse weights and exclude most of the features. This advice is only indicative, since I do not know anything about your dataset or your network architecture. Lastly, I would suggest studying the basics of machine learning more thoroughly, and then continuing with neural networks, before practicing on real data.
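To make the l1 suggestion concrete, here is a minimal Keras sketch; the regularization strength, the layer width, and the idea of ranking input features by the first layer's weights are illustrative assumptions to be tuned and validated, not a recipe:

```python
# Sketch: l1 regularization on the first layer pushes the weights of
# uninformative input features toward zero, which can then guide feature dropping.
import numpy as np
import tensorflow as tf

n_features = 2000  # as in the question

model = tf.keras.Sequential([
    tf.keras.layers.Dense(200, activation="relu", input_shape=(n_features,),
                          kernel_regularizer=tf.keras.regularizers.l1(1e-4)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# ... model.fit(X_train, y_train, ...) on your own data ...

# After training, rank input features by the total absolute weight leaving them;
# features whose row is (near) zero contribute little to the first layer.
w = model.layers[0].get_weights()[0]          # shape: (n_features, 200)
importance = np.abs(w).sum(axis=1)
least_useful = np.argsort(importance)[:50]    # candidate features to drop
```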
Is there any rule for choosing the number of layers in a convolutional neural network, or is it just trial and error? If we have an input image of size 256 x 256, how can we create the best model: how many conv layers, max-pool layers, and FC layers, and which filter sizes and learning rate? Also, how can we know the optimal number of epochs for training on the dataset? Any ideas?
There has been a good discussion of this topic on stats.stackexchange:
How to choose the number of hidden layers and nodes in a feedforward neural network?
Otherwise, there is no straight and easy answer to this. I can only suggest diving into the scientific literature:
https://scholar.google.com/scholar?q=optimal+size+of+neural+network&btnG=&hl=en&as_sdt=0%2C5&as_vis=1
I've built some neural networks with TensorFlow, like basic MLPs and convolutional neural networks. Now I want to move on to recurrent neural networks. However, I'm not experienced in natural language processing, so the TensorFlow NLP tutorials for RNNs are not easy for me to read (and not really interesting, either).
Basically I want to start off with something simple, not an LSTM.
How would one build a simple recurrent neural network, like an Elman network, in TensorFlow?
I was only able to find GRU or LSTM RNN examples for TensorFlow, mostly for NLP. Does anyone know of some simple recurrent neural network tutorials or examples for TensorFlow?
This figure shows a basic Elman network, which is often simply called an SRN (simple recurrent network).
One option is to use the built-in RNNCell located in tensorflow/python/ops/rnn_cell.py.
If you don't want to do that you can make your own RNN. The RNN will train using back-propagation through time. Try unrolling the network a fixed number of steps, e.g. consider input sequences of length ten. Then you can write a loop in python to do all of the matrix multiplications for each step of the network. Each time you can take the output from the previous step and concatenate it with the input to that step. It will not be too many lines of code to get this working.
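To make the unrolling idea concrete, here is a minimal sketch of a manually unrolled Elman network, written in current eager-style TensorFlow rather than against the rnn_cell.py API mentioned above; all sizes and the sequence length are illustrative:

```python
# Sketch of a manually unrolled Elman (simple recurrent) network.
# Sizes and sequence length are illustrative; written for eager TensorFlow 2.
import tensorflow as tf

n_input, n_hidden, n_steps, batch_size = 8, 16, 10, 4

# One weight matrix applied to the concatenation [input, previous hidden state].
W = tf.Variable(tf.random.normal([n_input + n_hidden, n_hidden], stddev=0.1))
b = tf.Variable(tf.zeros([n_hidden]))

def elman_forward(inputs):
    """inputs: list of n_steps tensors, each of shape (batch_size, n_input)."""
    h = tf.zeros([batch_size, n_hidden])      # initial hidden state
    outputs = []
    for x_t in inputs:                        # unrolled loop over time steps
        concat = tf.concat([x_t, h], axis=1)  # concatenate input with previous state
        h = tf.tanh(tf.matmul(concat, W) + b)
        outputs.append(h)
    return outputs

# Example run on a batch of random sequences of length n_steps.
xs = [tf.random.normal([batch_size, n_input]) for _ in range(n_steps)]
hs = elman_forward(xs)
print(hs[-1].shape)  # (4, 16)
```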