Image classification using SVM - Python

I have a set of images classified as good quality and bad quality. I have to train a classification model so that any new image can be classified as good/bad. SVM seems to be the best approach to do it. I have done image processing in MATLAB but not in Python.
Can anyone suggest how to do it in Python? What are the libraries? For SVM there is scikit-learn, but what about feature extraction from images and PCA?

I would start by reading this simple tutorial and then move on to the OpenCV tutorials for Python. Also, if you are familiar with the sklearn interface, there is Scikit-Image.
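Since the question also asks about feature extraction and PCA, here is a minimal sketch of the scikit-learn route; it assumes the images are already loaded into a list of equally sized grayscale NumPy arrays called images with matching labels, and the PCA size and kernel choice are placeholders:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# One flattened image per row; labels: 1 = good quality, 0 = bad quality
X = np.vstack([img.ravel() for img in images]).astype(np.float32)
y = np.array(labels)

# Reduce dimensionality with PCA, then classify with an SVM
model = make_pipeline(PCA(n_components=50), SVC(kernel='linear'))
model.fit(X, y)

print(model.predict(new_image.ravel().reshape(1, -1)))   # new_image: a preprocessed test image (assumed)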

I am using OpenCV 2.4, Python 2.7 and PyCharm.
SVM is a machine learning model for data classification. OpenCV 2.4 has PCA and SVM built in. The steps for building an image classifier using SVM are:
Resize each image
Convert it to grayscale
Compute PCA on the grayscale image
Flatten the result and append it to the training list
Append the label to the training labels
Sample code is
import os
import cv2
import numpy as np

training_set = []
training_labels = []
listing1 = os.listdir(path1)  # path1: folder holding the images of one class

for file in listing1:
    img = cv2.imread(path1 + file)
    res = cv2.resize(img, (250, 250))
    gray_image = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    xarr = np.squeeze(np.array(gray_image).astype(np.float32))
    m, v = cv2.PCACompute(xarr)   # mean and eigenvectors of the image
    arr = np.array(v)
    flat_arr = arr.ravel()        # flatten the eigenvectors into one feature vector
    training_set.append(flat_arr)
    training_labels.append(1)     # use a different label for the other class
Now train the SVM:
trainData = np.float32(training_set)
responses = np.float32(training_labels)

# Parameter values taken from the OpenCV-Python tutorials; tune them for your data
svm_params = dict(kernel_type=cv2.SVM_LINEAR, svm_type=cv2.SVM_C_SVC, C=2.67, gamma=5.383)

svm = cv2.SVM()
svm.train(trainData, responses, params=svm_params)
svm.save('svm_data.dat')
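To classify a new image later you can reload the saved model; a minimal sketch, assuming the same resize/PCA preprocessing as above and a placeholder file name:

svm = cv2.SVM()
svm.load('svm_data.dat')

img = cv2.imread('new_image.jpg')           # placeholder path
res = cv2.resize(img, (250, 250))
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
m, v = cv2.PCACompute(np.float32(gray))
sample = np.float32(np.array(v).ravel())
result = svm.predict(sample)                # returns the predicted label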
I think this will give you some idea.

Take a look at dlib and OpenCV. Both are mature computer vision frameworks implemented in C++ with Python bindings. That is important because it means they rely on compiled code under the hood, so they are significantly faster than if they were written in straight Python. I believe the SVM implementation in dlib is based on more recent research at the moment, so you may want to take that into consideration, as you may get better results using it.

Related

How to use tensorflow model for predicting my own images

I've just started with TensorFlow. I wrote a program that uses the Fashion_MNIST dataset to train the model and then predicts the labels using 'test_images', and it's working well so far.
But I am curious how I can use my own image of a shoe or shirt for prediction, because all the test images are of shape 28*28. How can I do this?
The task you are engaged in is data preparation and preprocessing. Given a directory of images, one thing you still have to do is tag them; for this task I recommend labelImg.
If you also need the input to have a specific size, such as the 28*28 in your example, you can use digital image processing software; the OpenCV library has resizing tools that work for this.
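As a concrete illustration, here is a minimal sketch of preparing one of your own photos for a model trained on Fashion-MNIST; the file name and the variable model are placeholders, and it assumes the network expects 28x28 grayscale inputs scaled to [0, 1]:

import cv2
import numpy as np

img = cv2.imread('my_shoe.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name
img = cv2.resize(img, (28, 28))
img = 255 - img                      # Fashion-MNIST items are light on a dark background; invert if needed
img = img.astype('float32') / 255.0
img = img.reshape(1, 28, 28)         # a batch of one, matching the training shape

prediction = model.predict(img)      # model: your trained Keras model (assumed)
print(np.argmax(prediction))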

Processing Image for machine learning

I started doing medical image analysis for a project.
In this project I have images of human kidneys with and without stones. The aim is to predict whether a given new image contains a stone or not.
I chose the KNN classifier model to do the classification, but I do not understand the image processing. I have some knowledge of segmentation. I can convert an image into an array for processing, but I need some pointers to understand the process.
Image - https://i.stack.imgur.com/9FDUM.jpg
For image classification I would recommend you use pre-trained neural networks like ResNet.
Frameworks like TensorFlow give a good API to re-train pre-trained neural networks for a different use case.
You can follow the link below:
https://www.tensorflow.org/hub/tutorials/image_retraining
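If you prefer to stay in Keras rather than the TF Hub retraining script linked above, a minimal transfer-learning sketch could look like this; it assumes TensorFlow 2.x, two classes (stone / no stone), and a placeholder data directory with one sub-folder per class:

import tensorflow as tf

base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                      pooling='avg', input_shape=(224, 224, 3))
base.trainable = False               # keep the pre-trained weights frozen

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input),
    base,
    tf.keras.layers.Dense(2, activation='softmax')   # stone / no stone
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# 'data' is a placeholder folder such as data/stone and data/no_stone
train_ds = tf.keras.preprocessing.image_dataset_from_directory('data', image_size=(224, 224))
model.fit(train_ds, epochs=5)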
Image processing is done to convert digital images into a format which is easier for a computer to calculate statistics on.
Images do not always contain only the necessary information; there is noise and a lot of unnecessary background which won't be required for a specific purpose.
The goal of processing an image is to extract the region of interest from the whole image.
Along with this, various enhancements are applied to the image so that we get features that are useful for calculating inferences.
Processing an image consists of various enhancement and segmentation techniques, plus steps such as histogram equalization, which in the end are used to extract features. Doing this processing generally yields better features.
Image processing in itself is a vast topic; I recommend reading about it in papers from Google Scholar.
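To make this concrete, here is a minimal OpenCV sketch of such a pipeline (denoising, histogram equalization, thresholding, and keeping the largest contour as the region of interest); the file name and the thresholding strategy are placeholders, not a validated recipe for kidney images:

import cv2
import numpy as np

img = cv2.imread('kidney.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name

img = cv2.medianBlur(img, 5)         # reduce noise
img = cv2.equalizeHist(img)          # histogram equalization to improve contrast

# Segment bright structures with Otsu thresholding (placeholder strategy)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep the largest connected region as the region of interest
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]   # [-2] works across OpenCV versions
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)
roi = img[y:y + h, x:x + w]          # crop; compute features for the KNN classifier on this region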

Using your own Data in Tensorflow

I already know how to make a neural network using the MNIST dataset. I have been searching for tutorials on how to train a neural network on my own dataset for 3 months now, but I'm just not getting it. If someone can suggest any good tutorials or explain how all of this works, please help.
PS. I won't install NLTK. It seems like a lot of people train their neural networks on text, but I won't do that. If I installed NLTK, I would only use it once.
I suggest you use the OpenCV library. Whether you use the MNIST data or PIL, once loaded they're all just NumPy arrays. If you want to make your own dataset fit a model built for MNIST, here's how I did it:
1. Use cv2.imread to load all the images you want to act as training data.
2. Use cv2.cvtColor to convert all the images to grayscale and resize them to 28x28.
3. Divide each pixel in all the datasets by 255.
4. Do the training as usual!
I haven't tried it with your own format, but theoretically it's the same.
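A minimal sketch of those four steps, assuming your images live in one folder per class (the folder names and root path are placeholders):

import os
import cv2
import numpy as np

classes = ['class_a', 'class_b']     # placeholder class names, one sub-folder per class
images, labels = [], []

for label, name in enumerate(classes):
    folder = os.path.join('my_data', name)                    # placeholder root folder
    for fname in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, fname))         # step 1: load
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # step 2: grayscale
        gray = cv2.resize(gray, (28, 28))                     #         and resize to 28x28
        images.append(gray.astype('float32') / 255.0)         # step 3: scale to [0, 1]
        labels.append(label)

x_train = np.array(images).reshape(-1, 28, 28)   # same shape as the MNIST arrays
y_train = np.array(labels)                       # step 4: feed these to your training code as usual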

What features of images will produce good result used in SVM multiclass image classification?

I am using OpenCV 2.4 and Python 2.7. What features of images can be used for SVM classification? I have gone through SURF and SIFT, but as a beginner they seem very difficult to me. What are the other feature extraction techniques?
If you are looking for the simplest representations, these will help you. These two are very simple compared to SIFT and SURF:
Bitmap representation
HOG (Histogram of Oriented Gradients)
SVM is a machine learning model for data classification. I have built a simple SVM classifier for the case where you have two folders of images, say birds and squirrels. The steps I followed are:
Extract HOG features of the images and append them to a list:
for file in listing1:
    img = cv2.imread(path1 + file)
    res = cv2.resize(img, (250, 250))
    h = hog(res)                     # hog(): your HOG feature function (see the sketch below)
    training_set.append(h)
Append the labels as well, inside the same loop:
    training_labels.append(1)        # 1 for this class; use another label for the second folder
Convert both lists to NumPy arrays:
trainData = np.float32(training_set)
responses = np.float32(training_labels)
Train the SVM:
svm = cv2.SVM()
svm.train(trainData, responses, params=svm_params)   # svm_params as in the earlier answer
Test the SVM:
result = svm.predict_all(testData)
print result
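The answer does not say where hog() comes from; here is a minimal sketch of one possible implementation, assuming scikit-image is installed (OpenCV's cv2.HOGDescriptor would be another option):

import cv2
from skimage.feature import hog as skimage_hog

def hog(image):
    # Convert to grayscale and compute a HOG feature vector
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return skimage_hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))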

Built in function for bag of visual words encoding in python

In MATLAB I have this function:
function psi = encodeImage(encoder, im)
This function takes:
im, which is a list of image names
encoder, which is bovw.mat; I have this file as the encoder
The function does bag of visual words encoding and returns the spatial histograms of the images.
I use these histograms for training an SVM classifier.
I am doing this task in Python, and I don't want to implement the bag of visual words encoding myself, as my main task is to implement the SVM. Is there a built-in function in Python that does bag of visual words encoding and returns spatial histograms, so I can train the SVM classifier on the histograms?
Are you doing something similar to this?
http://www.robots.ox.ac.uk/~vgg/practicals/category-recognition/index.html
There is a computer vision library called VLFeat. Its MATLAB version is in an active development state, but a Python interface exists as well.
It supports all the major image processing features:
Scale-Invariant Feature Transform (SIFT)
Dense SIFT (DSIFT)
Integer k-means (IKM)
Hierarchical Integer k-means (HIKM)
Maximally Stable Extremal Regions (MSER)
Quick shift image segmentation
I am not sure whether pyvlfeat will be sufficient or not. In fact, I was trying to do the same and couldn't figure it out. If it works, awesome, mention the trick in a comment. If you have figured out some other method, please mention that too.
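If pyvlfeat turns out to be a dead end, OpenCV ships bag-of-visual-words helpers that get most of the way there (they produce plain histograms, not spatial ones); a minimal sketch, assuming OpenCV 2.4 with the non-free SIFT module and placeholder image paths and vocabulary size:

import cv2
import numpy as np

detector = cv2.SIFT()                        # SIFT detector/descriptor (non-free module)
matcher = cv2.BFMatcher(cv2.NORM_L2)

# 1. Build a visual vocabulary from descriptors of the training images
bow_trainer = cv2.BOWKMeansTrainer(100)      # 100 visual words (placeholder)
for path in train_image_paths:               # train_image_paths: list of image files (assumed)
    gray = cv2.imread(path, 0)
    kp, desc = detector.detectAndCompute(gray, None)
    if desc is not None:
        bow_trainer.add(desc)
vocabulary = bow_trainer.cluster()

# 2. Encode every image as a normalized histogram over the vocabulary
bow_extractor = cv2.BOWImgDescriptorExtractor(detector, matcher)
bow_extractor.setVocabulary(vocabulary)

histograms = []
for path in train_image_paths:
    gray = cv2.imread(path, 0)
    hist = bow_extractor.compute(gray, detector.detect(gray))
    histograms.append(hist.ravel())

trainData = np.float32(histograms)           # feed this to cv2.SVM like any other feature matrix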
