I am using an ImageNet-trained network to extract features and classify my own images. My images (microscopic images) are quite different from cats and dogs, but the features extracted from the ImageNet network still gave quite promising classification results.
It would be very easy for me to generate millions of small microscopic images that carry no labels. Would it somehow be possible to train my own convolutional neural network on them? My target images for classification after transfer learning are labeled. Is this possible, or could I use pseudo-labeling (e.g., classes derived from mean(histogram) or similar)?
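One common option with a large pool of unlabeled images is unsupervised pretraining, for example training a convolutional autoencoder on the unlabeled microscopy images and then reusing its encoder as a feature extractor (or as initialization) for the labeled task. A minimal PyTorch sketch, with all names illustrative:

# Minimal sketch of unsupervised pretraining with a convolutional autoencoder
# (PyTorch). The encoder can later be reused as a feature extractor or as
# initial weights for the labeled classification task. Names are illustrative.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# unlabeled_loader is assumed to yield batches of unlabeled image tensors
# for images in unlabeled_loader:
#     recon = model(images)
#     loss = criterion(recon, images)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

After pretraining, the decoder is discarded and the encoder's weights initialize (or feed features into) the supervised classifier trained on the labeled target images.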
I'm new to DL and currently learning to develop a classification model using PET-CT scans. Apparently there is a series of dcm images per patient. It might be a dumb question, but for those who have developed DL classification models using pydicom: how do you feed those images into the neural network? I am familiar with normal classification, where the input would be something like:
Fruit Classification:
train folder
    apples
        apple1.jpg
        apple2.jpg
        apple3.jpg
    other fruits...
For classification using PET-CT images:
train folder
    class 1
        patient1
            first check up
                dcm1
                dcm2
                dcmn
            second check up
                dcm1
                dcm2
                dcmn
        patient2
            first check up
                dcm1
                dcm2
                dcmn
                ...
    class others...
In TensorFlow/PyTorch, would the usual loading approach still work, even though there are many subfolders and files for each patient?
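A folder-per-class loader such as ImageFolder assumes one ordinary image file per example, so the nested patient/check-up layout usually needs a small custom dataset. A rough PyTorch sketch using pydicom, where the paths, labels, and preprocessing are placeholders:

# Rough sketch of a custom PyTorch Dataset that walks the nested
# class/patient/check-up folders and reads DICOM slices with pydicom.
# Paths, labels, and preprocessing here are placeholders.
import os
import numpy as np
import pydicom
import torch
from torch.utils.data import Dataset

class DicomSliceDataset(Dataset):
    def __init__(self, root):
        self.samples = []  # list of (path_to_dcm, class_index)
        classes = sorted(os.listdir(root))
        for idx, cls in enumerate(classes):
            for dirpath, _, filenames in os.walk(os.path.join(root, cls)):
                for fname in filenames:
                    if fname.lower().endswith(".dcm"):
                        self.samples.append((os.path.join(dirpath, fname), idx))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, label = self.samples[i]
        ds = pydicom.dcmread(path)
        img = ds.pixel_array.astype(np.float32)
        img = (img - img.min()) / (img.max() - img.min() + 1e-6)  # simple rescale
        return torch.from_numpy(img).unsqueeze(0), label          # 1-channel tensor

Whether the slices are treated as independent 2D examples (as above) or stacked per check-up into a 3D volume is a separate modelling decision.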
I was wondering if it is useful to further train a pre-trained ResNet (pre-trained on ImageNet) with images that are closer to my classification problem. I want to use 50,000 labeled images of trees from a paper to update the weights of the pre-trained ResNet. Then I would like to use these weights to re-train and evaluate the ResNet, hopefully better fitted this way, on my own set of tree images.
I already used the pre-trained ResNet on my own images with moderate success. Due to the small dataset size (~5,000 images) I thought it might be smart to further train the pre-trained ResNet with more similar data.
Any suggestions or experiences you want to share?
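For illustration, the two-stage idea described above could look roughly like this in PyTorch with a recent torchvision; the data loaders and class counts are placeholders:

# Sketch of the two-stage idea: first fine-tune an ImageNet-pretrained ResNet on
# the large, related tree dataset, then fine-tune again on the small target set.
import torch
import torch.nn as nn
from torchvision import models

def fit(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

num_paper_classes = 100  # placeholder: classes in the paper's tree dataset
num_own_classes = 5      # placeholder: classes in my own dataset

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Stage 1: adapt to the ~50,000 labeled tree images from the paper
model.fc = nn.Linear(model.fc.in_features, num_paper_classes)
# fit(model, paper_tree_loader, epochs=5, lr=1e-4)   # paper_tree_loader: placeholder

# Stage 2: replace the head and fine-tune on the ~5,000 own images
model.fc = nn.Linear(model.fc.in_features, num_own_classes)
# fit(model, own_tree_loader, epochs=10, lr=1e-5)    # own_tree_loader: placeholder

A lower learning rate in the second stage helps avoid overwriting what was learned from the larger, related dataset.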
Where can I find details on implementing Siamese networks to perform image similarity and to retrieve the most similar image from a dataset?
It is difficult to get a large number of images for every class, so only a few images (e.g., 10) are available for most of the classes.
SIFT and ORB seem to perform poorly on some classes.
My project is to differentiate between license plates based on the states of the UAE. Here I upload a few example images.
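For reference, a minimal sketch of the usual Siamese setup in PyTorch: a shared embedding network, a contrastive loss over image pairs, and nearest-neighbour retrieval over the embeddings. The backbone choice and all names are illustrative:

# Minimal sketch of a Siamese embedding network for image similarity (PyTorch).
# A pretrained backbone maps each plate image to an embedding; matching and
# non-matching pairs are trained with a contrastive loss, and retrieval is a
# nearest-neighbour search over the embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class EmbeddingNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)   # unit-length embeddings

def contrastive_loss(emb1, emb2, same, margin=1.0):
    # same = 1 for matching pairs, 0 for non-matching pairs
    d = F.pairwise_distance(emb1, emb2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Retrieval: embed the whole gallery once, then return the gallery image whose
# embedding is closest to the query embedding (smallest distance).

Because the two branches share weights, this setup works with few images per class: training pairs can be formed across classes, and new classes only need to be embedded, not retrained.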
When there is little training data, no matter how annoying it sounds, the best approach is usually to collect more. Deep networks are infamously data-hungry and their performance is poor when data is scarce. That said, there are approaches that might help you:
Transfer learning
Data augmentation
In transfer learning, you take an already trained deep net (e.g. ResNet50), which was trained for some other task (e.g. ImageNet), freeze all its weights except those in the last few layers, and train on your task of interest.
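In PyTorch/torchvision this could look roughly like the following sketch, where the pretrained backbone is frozen and only a new final layer is trained (the class count and data loader are placeholders):

# Sketch of transfer learning: freeze a pretrained ResNet50 and train only a new
# final layer on the target classes. num_classes and train_loader are placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10   # placeholder: number of target classes

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                               # freeze pretrained weights

model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# for x, y in train_loader:              # placeholder data loader
#     optimizer.zero_grad()
#     loss_fn(model(x), y).backward()
#     optimizer.step()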
Data augmentation slightly modifies your training data in some predictable way. In your case you can rotate your image by a small angle, apply a perspective transformation, scale the image intensities or slightly change the colors. You apply a different set of these operations with different parameters every time you want to use a particular training image. This way you generate new training examples, enlarging your training set.
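For example, with torchvision transforms such augmentations can be applied on the fly, so each epoch sees a differently perturbed version of every training image (the parameter values below are only illustrative):

# Sketch of on-the-fly data augmentation with torchvision transforms; each time an
# image is drawn, a different random rotation/perspective/colour jitter is applied.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
# e.g. datasets.ImageFolder("train_folder", transform=augment)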
I'm trying to find examples of classifying labeled images using an RNN with custom data. I can't find any example other than the MNIST dataset. Any example like this repository, where a CNN is used for classification, would be appreciated. Any help regarding the classification of images using an RNN would be helpful. I am trying to replace the CNN network of the following tutorial.
Aymericdamien has some of the best examples out there, and they have an example of using an RNN with images.
https://github.com/aymericdamien/TensorFlow-Examples
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/recurrent_network.ipynb
The example is using MNIST, but it can be applied to any image.
However, I'll point out that you're unlikely to find many examples of using an RNN to classify an image because RNNs are inferior to CNNs for most image processing tasks. The example linked to above is for educational purposes more than practical purposes.
Now, if you are attempting to use an RNN because you have a sequence of images you wish to process, such as a video, a more natural approach would be to combine a CNN (for the image processing part) with an RNN (for the sequence processing part). To do this you would typically pretrain the CNN on some classification task such as ImageNet, feed each image through the CNN, and use the output of the CNN's last layer as the input at each timestep of an RNN. You would then let the entire network train with the loss function defined on the RNN.
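A rough sketch of that CNN-plus-RNN combination in PyTorch, with the class count and input shapes as placeholders:

# Sketch of combining a pretrained CNN feature extractor with an LSTM for a
# sequence of frames (e.g. video): the CNN output at each frame becomes the
# input at each timestep of the RNN.
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        feat_dim = cnn.fc.in_features
        cnn.fc = nn.Identity()                       # keep the penultimate features
        self.cnn = cnn
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):                       # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                 # classify from the last timestep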
I am currently working on some project related to machine learning.
I extracted some features from the object.
So I trained and tested those features with NB, SVM, and other classification algorithms and got results of about 70 to 80%.
When I trained the same features with a neural network using nolearn.dbn and then tested it, only about 25% were correctly classified. I had 2 hidden layers.
I still don't understand what is going wrong with the neural network.
I hope to get some help.
Thanks
Try increasing the number of hidden units and the learning rate. The power of neural networks comes from the hidden layers. Depending on the size of your dataset, the number of hidden units can go up to a few thousand. Also, please elaborate on the kind and number of features you're using. If the feature set is small, you're better off using SVMs and random forests instead of neural networks.
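As a quick sanity check of that advice (using scikit-learn rather than the nolearn API from the question), a larger hidden layer and an explicit learning rate can be tried like this, where X and y are assumed to be the already-extracted feature matrix and labels:

# Quick baseline with a larger hidden layer and an explicit learning rate, using
# scikit-learn's MLPClassifier. X and y are assumed to be the extracted features
# and labels; standardizing the features first usually helps.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(256,), learning_rate_init=0.01, max_iter=500)
clf.fit(scaler.transform(X_train), y_train)
print(clf.score(scaler.transform(X_test), y_test))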