What does it mean to train an SVM? - python

I am new to image processing. For my project I am building an "image classifier using SVM". My idea for the final software: I select an image and give it as input to my software, which classifies it; if I give it the image of an animal, it will classify it as a cat or a snake as appropriate.
When I googled this, the advice was: "First you need to train the SVM."
What does it mean to train an SVM?
What is the actual input to the SVM in my case (image classification)?
An SVM is just a classifier, so how does it classify images? Is it necessary for me to convert the image to any particular format? Please help.

A Support Vector Machine (SVM) is a machine learning model for supervised data classification. An SVM essentially learns a hyper-plane which separates the data space into two regions (in the two-class case). In your case, suppose you have images of snakes and cats and you need to classify them. The steps you'll need to follow are:
Extract 'features' from the images.
These 'features' may be functions of the appearance of the snake/cat in your case, e.g. the colour of the animal, the shape of the animal, etc. By concatenating these features you get a multi-dimensional feature vector.
Train an SVM classifier
Training essentially learns a separating hyper-plane between the feature vectors of the snake class and the cat class. For example, if your feature vectors are 2-dimensional, training an SVM classifier amounts to 'learning' a line which best separates your labelled training data.
You could use any of the many freely available machine learning libraries. If you speak Python, you could use sklearn for the task.
This task of learning (a hyper-plane, in the linear SVM case) is referred to as training.
Classify the images.
Once you have trained your model, you can then use it to classify images whose class is not known.
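Putting the steps together, a minimal sklearn sketch (the feature values below are made up for illustration; in practice you would compute them from each image, e.g. colour histograms or shape descriptors):

import numpy as np
from sklearn import svm

# Hypothetical 2-D feature vectors (e.g. [mean colour, elongation])
# extracted from labelled snake/cat training images.
X_train = np.array([[0.2, 0.8], [0.3, 0.9], [0.7, 0.2], [0.8, 0.3]])
y_train = np.array(["snake", "snake", "cat", "cat"])

clf = svm.SVC(kernel="linear")  # learns the separating hyper-plane
clf.fit(X_train, y_train)       # this step is the 'training'

# Classify a new, unlabelled image via its feature vector
print(clf.predict([[0.25, 0.85]]))  # -> ['snake']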
Note: I am simplifying a lot of the details/issues involved in this answer. I suggest you read up on SVMs.

Related

Using tensorflow classification for feature extraction

I am currently working on a system that extracts certain features from 3D objects (voxel grids, to be precise), and I would like to compare those features to automatically learned features in terms of classification performance in a TensorFlow CNN with some other data, but that is not the point here, just background.
My idea now is to take a dataset (ModelNet10), train a TensorFlow CNN to classify it, and then use what it learned there on my own dataset, not to classify, but to extract features.
So I want to throw away everything the CNN does except for what it extracts from the objects.
Is there any way to get these features, and how do I do that? I honestly have no idea.
Yes, it is possible to train models exclusively for feature extraction. This falls under transfer learning: you can either train your own model and then extract the features, or you can extract features from a pre-trained model and use them in your task, provided your task is similar in nature to what the pre-trained model was trained for. You can of course find a lot of material online on these topics, but the links below give details on how to go about it:
https://keras.io/api/applications/
https://keras.io/guides/transfer_learning/
https://machinelearningmastery.com/how-to-use-transfer-learning-when-developing-convolutional-neural-network-models/
https://www.pyimagesearch.com/2019/05/27/keras-feature-extraction-on-large-datasets-with-deep-learning/
https://www.kaggle.com/angqx95/feature-extractor-fine-tuning-with-keras
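For instance, a minimal Keras sketch of the pre-trained route (VGG16 and 224x224 RGB inputs are example choices, not requirements; for voxel grids the same idea of removing the classification head applies to your own trained 3D CNN):

import numpy as np
from tensorflow import keras

# Pre-trained route: load VGG16 without its classification head so the
# output is one feature vector per image (512 values with average pooling).
base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                pooling="avg", input_shape=(224, 224, 3))

images = np.random.rand(8, 224, 224, 3)  # stand-in for real, preprocessed data
features = base.predict(images)          # shape: (8, 512)

# Own-model route: cut your trained CNN at an intermediate layer
# ("dense_1" is a placeholder for one of your own layer names):
# extractor = keras.Model(my_model.input, my_model.get_layer("dense_1").output)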

CNN in Python with Keras

I made a simple CNN that classifies dogs and cats, and I want this CNN to detect images that are neither cats nor dogs; this must be a third, different class. How do I implement this? Should I use an R-CNN or something else?
P.S. I use Keras for the CNN.
What you want to do is called "transfer learning": using the learned weights of a net to solve a new problem.
Please note that this is quite hard and works under many constraints, i.e. using a CNN that can detect cars to detect trucks is simpler than using a CNN trained to detect people to also detect cats.
In any case you would take your pre-trained model, load the weights and continue to train it with new data and examples, as sketched below.
Whether this is faster, or indeed even better, than simply training a new model on all desired classes depends on the actual implementation and problem.
TL;DR: Transfer learning is hard! Unless you know what you are doing or have a specific reason, just train a new model on all classes.
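As an illustration of the continue-training route, a hedged Keras sketch (the file name, layer indexing and head size are assumptions about your setup, not a fixed recipe):

from tensorflow import keras

# Load your previously trained cat/dog model (placeholder file name).
old_model = keras.models.load_model("cats_vs_dogs.h5")

# Keep everything except the old 2-class output layer...
base = keras.Model(old_model.input, old_model.layers[-2].output)
base.trainable = False  # optionally freeze while the new head trains

# ...and attach a fresh 3-class head (cat, dog, neither).
outputs = keras.layers.Dense(3, activation="softmax")(base.output)
model = keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) on data covering all three classes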
You can train it with almost the same architecture (of course this depends on the architecture; if it is already bad, it will not be useful on more classes either. I would suggest using a state-of-the-art model architecture for dog/cat classification), but you will also need the dog and cat datasets in addition to the third-class dataset. Unfortunately, it is not possible to make the pre-trained model predict between all 3 classes by only training it on the third class later.
So, to cut it short: you will need all three datasets and to train the model from scratch if you want to predict between these three classes; otherwise, use the pre-trained model, and after training it on the third class it can predict whether an image belongs to this third class or not.
You should train with a new category by adding one more category, containing images that are not in the 2 categories above. I mean:
--cat_dir
    -*.jpg
--dog_dir
    -*.jpg
--not_at_all_dir
    -*.jpg
so the total number of categories you will train on is 3 (categories or classes, whichever you call them).
Then change the output of the final fully-connected dense layer to 3 categories.
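A minimal Keras sketch of such a 3-category setup (the architecture and image size are illustrative only):

from tensorflow import keras
from tensorflow.keras import layers

# The three folders above map directly to 3 labels when loaded with
# keras.utils.image_dataset_from_directory("data/", image_size=(128, 128)).
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # final dense layer: 3 categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])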

Probability for correct Image Classification in Tensorflow

I am using the TensorFlow retraining model for image classification, doing single-label classification.
I want to set a threshold for correct classification.
In other words, if the highest probability is less than a given threshold, I can say that the image is "unknown", i.e. if np.max(results) < 0.5 -> set label as "unknown".
So, is there any industry standard for setting this threshold? I can set an arbitrary value, say 60%, but is there any literature to back up such a threshold?
Any links or references will be very helpful.
Thanks a lot.
One-class classification (rejecting inputs as "unknown") is not something neural networks can do "off the shelf".
How would you train it? With only data relevant to your target domain, your model will only learn to output that one class.
You have two strategies:
you use the same strategy as in the "HotDog or Not HotDog" app: put the whole of ImageNet into two different folders, one with the class you want, the other containing everything else;
you use the convnet as a feature extractor and then use a second model, such as a One-Class SVM (sketched below).
You have to understand that one-class classification is not a simple, direct problem the way binary classification is.
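A minimal sketch of the second strategy with sklearn, assuming you already have convnet feature vectors (random stand-ins here):

import numpy as np
from sklearn.svm import OneClassSVM

# Feature vectors of images from your known class(es), e.g. taken from
# the convnet's penultimate layer (random stand-ins for illustration).
known = np.random.rand(100, 512)
candidates = np.random.rand(5, 512)

ocsvm = OneClassSVM(gamma="auto", nu=0.1).fit(known)
# +1 = resembles the training distribution, -1 = treat as "unknown"
print(ocsvm.predict(candidates))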

Image classification using SVM Python

I am currently working on a project to perform image recognition. There is a big set of images, and I have to predict whether or not an image contains given characteristics.
For example, the output could be whether or not there is a banana in the picture.
I would like to implement a classifier using SVM whose output is yes or no: the image contains the given characteristics or it does not. What is the simplest way to train an SVM classifier on images with 2 outputs? Is there any template to use in Python? Thanks a lot.
With an SVM you can classify a set of images. For example, you can train an SVM with a set of car and plane images. Once trained, it can predict the class of an unknown image, i.e. whether it is a car or a plane. There are also multiclass SVMs.
In your case, make two sets of images for training the SVM:
A set of images that contain the given characteristic (banana)
A set of images that do not contain that characteristic
Once the training phase is complete, the classifier will output which class a given image belongs to. If it is in the banana class, you can output Yes, otherwise No.
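A minimal sketch of this yes/no setup with sklearn (the feature vectors are random stand-ins; in practice extract colour histograms, HOG, or similar from your images):

import numpy as np
from sklearn.svm import SVC

# Feature vectors: 50 images with a banana (label 1), 50 without (label 0).
X = np.vstack([np.random.rand(50, 128), np.random.rand(50, 128)])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf").fit(X, y)

new_image = np.random.rand(1, 128)  # features of an unseen image
print("Yes" if clf.predict(new_image)[0] == 1 else "No")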
Useful links
Hand written Digit Recognition using python opencv
Squirrel and Bird Classifier using java
Edit
Fruit classifier using python

How to apply word2vec on images?

I have been studying the word2vec model by Google. I was able to generate vectors for a text corpus with up to 300 dimensions. It is a very impressive tool, and its accuracy improves further on big data.
I am curious: is there any way to use word2vec to generate vectors for grayscale images? I assume the approach is the same: you generate vectors based on pixel intensities and then compute a cosine similarity.
I am trying to build a model to compute a similarity distance between grayscale images. Is there any library capable of doing this, besides word2vec or GloVe, which work on text?
I agree with you that word2vec is a very impressive tool, but the model is trained by predicting the next word in an article or news text. All in all, I think that using word2vec on images does not make sense.
You can use skimage to compute some image measures, e.g. skimage.measure.
Word2vec is not a good model for images; however, I think what you really need is a bag-of-words model. In a basic method of image comparison, you convert images to lists of keypoint features (e.g. SIFT, SURF, etc.), then match clusters of points between images (e.g. with FLANN).
The high number of features in an image and the uncertainty of each point's representation make it difficult to use a basic one-layer network learner such as word2vec for image recognition. You may find better examples in these tutorials.
UPDATE after 3 years: I should also mention ConvNets and the many pre-trained models now available, from which you can extract visual features directly from pixels.
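A small OpenCV sketch of the keypoint-matching idea described above (ORB is used instead of SIFT/SURF as it is patent-free; the file names are placeholders):

import cv2

img1 = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("image2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()                        # detect keypoints + descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)                # brute-force descriptor matching

# More matches with smaller distances -> more similar images
mean_dist = sum(m.distance for m in matches) / max(len(matches), 1)
print(len(matches), "matches, mean distance", mean_dist)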
