Using your own data in TensorFlow - Python

I already know how to make a neural network using the mnist dataset. I have been searching for tutorials on how to train a neural network on your own dataset for 3 months now but I'm just not getting it. If someone can suggest any good tutorials or explain how all of this works, please help.
PS. I won't install NLTK. It seems like a lot of people are training their neural networks on text, but I won't do that. If I did install NLTK, I would only use it once.

I suggest you use the OpenCV library. Whether you load your data with the MNIST loaders or with PIL, once loaded it's all just NumPy arrays. If you want to make your own images fit a model trained on MNIST-style data, here's how I did it:
1. Use cv2.imread to load all the images you want to use as training data.
2. Use cv2.cvtColor to convert all the images to grayscale, and resize them to 28x28.
3. Divide each pixel in all the datasets by 255.
4. Do the training as usual!
I haven't tried this with a format entirely my own, but theoretically it's the same.
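A minimal sketch of those four steps, assuming a folder layout of my_digits/<label>/<image files> (the paths and label handling are placeholders for illustration):

import glob
import cv2
import numpy as np

images, labels = [], []
for path in glob.glob("my_digits/*/*.png"):        # assumed layout: my_digits/<label>/<files>
    label = int(path.split("/")[-2])               # folder name used as the class label
    img = cv2.imread(path)                         # step 1: load the image (BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # step 2: convert to grayscale...
    gray = cv2.resize(gray, (28, 28))              # ...and resize to 28x28
    images.append(gray.astype(np.float32) / 255.0) # step 3: scale pixels to [0, 1]
    labels.append(label)

x_train = np.array(images)                         # shape (n, 28, 28), like MNIST
y_train = np.array(labels)
# step 4: train as usual, e.g. model.fit(x_train, y_train, ...)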

Related

How to use tensorflow model for predicting my own images

I've just started with TensorFlow. I wrote a program that uses the Fashion_MNIST dataset to train the model and then predicts the labels using 'test_images', and it's working well so far.
But I am curious how I can use my own image of a shoe or shirt for prediction, because all the test images have shape 28*28. How can I do this?
The task you are engaged in is data preparation and preprocessing. Once you already have a directory with images, one of the things you must do is tag (label) the images; for this task I recommend labelImg.
If you also need the input to have a specific size, like the 28*28 in your example, you can use digital image processing software. The OpenCV library has resizing tools that work for this.
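For the 28*28 case specifically, here is a rough sketch of resizing your own photo and feeding it to the trained Fashion_MNIST model (the file name is a placeholder, and `model` is assumed to be your already-trained Keras model):

import cv2
import numpy as np

img = cv2.imread("my_shoe.jpg")               # your own photo (placeholder path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Fashion_MNIST images are grayscale
gray = cv2.resize(gray, (28, 28))             # match the 28x28 input used in training
gray = 255 - gray                             # Fashion_MNIST items are light on a dark background;
                                              # only invert if your photo is the opposite
x = gray.astype("float32") / 255.0            # same scaling as during training
x = x.reshape(1, 28, 28)                      # add a batch dimension

pred = model.predict(x)                       # `model` = your trained Fashion_MNIST model
print(np.argmax(pred[0]))                     # index of the predicted class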

How to import data into Tensorflow?

I am new to Tensorflow and to implementing deep learning. I have a dataset of images (images of the same object).
I want to train a Neural Network model using python and Tensorflow for object detection.
I am trying to import the data to Tensorflow but I am not sure what is the right way to do it.
Most of the tutorials available online use public datasets (e.g. MNIST), which are straightforward to import but not helpful when I need to use my own data.
Is there a procedure or tutorial that I can follow?
There are many ways to import images for training. You can use TensorFlow itself, but the images will be imported as TensorFlow objects, which you won't be able to visualize until you run the session.
My favorite tool for importing images is skimage.io.imread. The imported images will have shape (height, width, channels).
Or you can use the imread tool from scipy.misc (deprecated in newer SciPy versions).
To resize images, you can use skimage.transform.resize.
Before training, you will need to normalize all the images to have values between 0 and 1. To do that, you simply divide the images by 255.
The next step is to one-hot encode your labels into arrays of 0s and 1s.
Then you can build and train your CNN.
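A rough sketch of that pipeline with skimage (the file paths, target size, and label file are assumptions for illustration):

import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize

paths = sorted(glob.glob("data/images/*.jpg"))     # assumed location of your images
# resize() also converts uint8 images to floats in [0, 1], so no extra division is needed here
images = np.array([resize(imread(p), (64, 64, 3)) for p in paths])

# Integer class labels, one per image, in the same order as `paths`
# (how you get them depends on your data; a labels file is assumed here)
labels = np.loadtxt("data/labels.txt", dtype=int)
num_classes = int(labels.max()) + 1
one_hot = np.eye(num_classes)[labels]              # one-hot encode: rows of 0s with a single 1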
You could create a data directory with one subdirectory per image class, each containing the respective image files, and use the flow_from_directory method of tf.keras.preprocessing.image.ImageDataGenerator.
A tutorial on how to use this can be found in the Keras Blog.
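A short sketch of that approach (the directory names, image size, and batch size below are placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Expected layout: data/train/class_a/..., data/train/class_b/..., one subdirectory per class
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),    # images are resized on the fly
    batch_size=32,
    class_mode="categorical",  # one-hot labels, inferred from the subdirectory names
    subset="training",
)
val_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
    subset="validation",
)
# model.fit(train_gen, validation_data=val_gen, epochs=10)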

Any better approach to solve MemoryError?

I am working on deep learning. I am using Keras with the TensorFlow backend, and I have 36980 images to train on. I want to use VGG16, so I resized all of them to 224*224. So the size of the train array is around 22GB (36980*224*224*3*4 bytes). When I try to load this amount of data into a NumPy array, Python raises a MemoryError.
I have thought of splitting the training set into 10 pieces and training my model on only one such piece at a time. Is there any better approach to solve this problem? I am using Python 3 (64 bit version).
N.B.
To get good accuracy, I need the images to be as large as possible, so I can't resize them to a smaller size. Moreover, it's necessary to use RGB images here.
No fit_generator() solution please. A model trained using fit_generator() behaves abnormally while predicting, at least as far as I have seen.
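A rough sketch of the chunking idea described above, without fit_generator(): keep only the file paths in memory, load one slice at a time, and call fit() on that slice (the paths, label file, chunk count, and epoch count are assumptions):

import glob
import cv2
import numpy as np

paths = np.array(sorted(glob.glob("train/*.jpg")))   # all 36980 image paths (assumed location)
labels = np.load("labels.npy")                       # matching labels, assumed precomputed

def load_chunk(chunk_paths):
    imgs = [cv2.resize(cv2.imread(p), (224, 224)) for p in chunk_paths]
    return np.array(imgs, dtype=np.float32) / 255.0  # roughly 2.2 GB per chunk instead of 22 GB

n_chunks = 10
for epoch in range(5):                               # several passes over the whole set
    for idx in np.array_split(np.arange(len(paths)), n_chunks):
        x, y = load_chunk(paths[idx]), labels[idx]
        model.fit(x, y, batch_size=32, epochs=1)     # `model` = your compiled VGG16-based model
        del x                                        # free the chunk before loading the next one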

Building a simple image recognition engine with the tensorflow cifar10 example

I would like to make a simple engine to classify an image dataset, and I am asking for a guide or help.
I have already trained on the dataset and saved the trained model (at step 1000000), and the eval accuracy is about 86.6%.
Here are the steps I would like to follow:
Download an image and convert it into a TensorFlow dataset (I am not sure about this, since the tutorial data is all converted to the bin format).
Feed the image into the model trained on cifar10 and test whether the image is a dog, a cat, or something else (the printed value would be something like: this image would be dog with 70% confidence),
or process a whole image folder if I input several images.
The whole purpose of this is to visualize the entire process and use it for real with TensorFlow.
I would appreciate it if anyone could at least give me a reference.
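The tutorial's own cifar10_eval.py shows how to restore the checkpoint and run the graph; the part that usually trips people up is getting a single downloaded image into the shape the model expects. A rough sketch of that preprocessing step (the file name is a placeholder; the 24x24 input size and per-image standardization come from the cifar10 tutorial code, so verify them against your version):

import cv2
import numpy as np

CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]   # CIFAR-10 label order

img = cv2.imread("my_dog.jpg")                    # your downloaded image (placeholder path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)        # cv2 loads BGR; the model was trained on RGB
img = cv2.resize(img, (24, 24)).astype(np.float32)
img = (img - img.mean()) / max(img.std(), 1e-6)   # rough stand-in for tf.image.per_image_standardization
batch = img[None]                                 # shape (1, 24, 24, 3)

# After restoring the checkpoint and building softmax(logits) as in cifar10_eval.py:
# probs = sess.run(softmax_op, feed_dict={images_placeholder: batch})[0]
# print("this image would be %s with %.0f%% confidence" % (CLASSES[probs.argmax()], 100 * probs.max()))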

Image classification using SVM - Python

I have a set of images classified as good quality image and bad quality image. I have to train a classification model so that any new image can be classified as good/bad. SVM seems to be the best approach to do it. I have done image processing in MATLAB but not in python.
Can anyone suggest how to do it in Python? What are the libraries? For SVM there is scikit-learn; what about feature extraction of images and PCA?
I would start reading this simple tutorial and then move into the OpenCV tutorials for Python. Also, if you are familiar with the sklearn interface there is Scikit-Image.
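Since scikit-learn came up, here is a rough sketch of a PCA + SVM pipeline with it, assuming you have already loaded your images into a flattened NumPy array (the file names, image size, and parameters are placeholders):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: images flattened to vectors, e.g. shape (n_images, 64*64); y: 0 = bad quality, 1 = good quality
X = np.load("images_flattened.npy")
y = np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(PCA(n_components=50),      # feature extraction / dimensionality reduction
                    SVC(kernel="rbf", C=1.0))  # the SVM classifier itself
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))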
I am using OpenCV 2.4, Python 2.7 and PyCharm.
SVM is a machine learning model for data classification. OpenCV has PCA and SVM built in. The steps for building an image classifier using SVM are:
Resize each image
Convert it to grayscale
Compute PCA on it
Flatten the result and append it to the training list
Append the label to the training labels
Sample code is
import cv2
import numpy as np

training_set = []
training_labels = []

for file in listing1:                      # listing1: list of image file names for this class
    img = cv2.imread(path1 + file)         # path1: directory holding this class's images
    res = cv2.resize(img, (250, 250))
    gray_image = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    xarr = np.squeeze(np.array(gray_image).astype(np.float32))
    m, v = cv2.PCACompute(xarr)            # mean and eigenvectors of the image
    arr = np.array(v)
    flat_arr = arr.ravel()                 # flatten the eigenvectors into one feature vector
    training_set.append(flat_arr)
    training_labels.append(1)              # label for this class
Now training (svm_params below is an assumed example; tune it for your data):
svm_params = dict(svm_type=cv2.SVM_C_SVC, kernel_type=cv2.SVM_LINEAR, C=1)
trainData = np.float32(training_set)
responses = np.float32(training_labels)
svm = cv2.SVM()
svm.train(trainData, responses, params=svm_params)
svm.save('svm_data.dat')
I think this will give you some idea.
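To classify a new image later, here is a sketch of the matching prediction step against the same OpenCV 2.4 API (the file name is a placeholder; preprocess exactly as during training):

img = cv2.imread("new_image.jpg")
res = cv2.resize(img, (250, 250))
gray_image = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
xarr = np.squeeze(np.array(gray_image).astype(np.float32))
m, v = cv2.PCACompute(xarr)
sample = np.float32(np.array(v).ravel()).reshape(1, -1)

svm = cv2.SVM()
svm.load('svm_data.dat')       # reload the model saved during training
print(svm.predict(sample))     # returns the predicted label, e.g. 1.0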
Take a look at dlib and OpenCV. Both are mature computer vision frameworks implemented in C++ with Python bindings. That is important because it means they rely on compiled code under the hood, so they are significantly faster than if they were written in pure Python. I believe the implementation of the SVM in dlib is based on more recent research at the moment, so you may want to take that into consideration, as you may get better results using it.
