From the MNIST dataset example I know that the dataset looks something like (60000, 28, 28), with labels of shape (60000,). I can print the first three examples of the MNIST dataset
and the first three labels that go with them.
The images and labels are bound together.
I want to know how I can bind a folder of 1200 images, each 64 by 64, to an Excel file with a column named "damage" containing 5 different classes, so I can train a neural network.
For example: an image of a car door, where the damage is class 3.
Here's a rough sketch of how you can approach this problem.
Loading each image
The first step is pre-processing each image. You can use the Python Imaging Library (PIL) for this.
Example:
from PIL import Image
def load_image(path):
    image = Image.open(path)
    # Images can be in one of several different modes.
    # Convert to single consistent mode.
    image = image.convert("RGB")
    image = image.resize((64, 64))
    return image
Optional step: cropping
Cropping the images to focus on the feature you want the network to pay attention to can improve performance, but requires some work for each training example and for each inference.
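For example, here is a minimal sketch of a fixed-box crop with PIL; the helper name load_and_crop and the box coordinates are purely illustrative and would need tuning for your images:
from PIL import Image
def load_and_crop(path):
    image = Image.open(path).convert("RGB")
    # (left, upper, right, lower) in pixels; illustrative values only
    image = image.crop((100, 50, 500, 450))
    return image.resize((64, 64))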
Loading all images
I would load the images like this:
import glob
import pandas as pd
image_search_path = "image_directory/*.png"
def load_all_images():
    images = []
    for path in glob.glob(image_search_path):
        image = load_image(path)
        images.append({
            'path': path,
            'img': image,
        })
    return pd.DataFrame(images)
Loading the labels
I would use Pandas to load the labels. Suppose you have an Excel file named labels.xlsx with the columns path and label.
labels = pd.read_excel("labels.xlsx")
You then have the problem that the images that are loaded are probably not in the same order as your file full of labels. You can fix this by merging the two datasets.
images = load_all_images()
images_and_labels = images.merge(labels, on="path", validate="1:1")
# check that no rows were dropped or added, say by a missing label
assert len(images.index) == len(images_and_labels.index)
assert len(labels.index) == len(images_and_labels.index)
Converting images to numpy
Next, you need to convert both the images and the labels into numpy arrays.
Example for images:
import numpy as np
images_processed = []
for image in images_and_labels['img'].tolist():
    image = np.array(image)
    # Does the image have the expected shape?
    assert image.shape == (64, 64, 3)
    images_processed.append(image)
images_numpy = np.array(images_processed)
# Check that this has the expected shape. You'll need
# to replace 1200 with the number of training examples.
assert images_numpy.shape == (1200, 64, 64, 3)
Converting labels to numpy
Assuming you're setting up a classifier, like MNIST, you'll first want to decide on an ordering of categories and map each category to its position within that ordering.
The ordering of categories is arbitrary, but you'll want to be consistent about it.
Example:
categories = {
    'damage high': 0,
    'damage low': 1,
    'damage none': 2,
}
categories_num = images_and_labels['label'].map(categories)
# Are there any labels that didn't get mapped to something?
assert categories_num.isna().sum() == 0
# Convert labels to numpy
labels_np = categories_num.values
# Check shape. You'll need to replace 1200 with the number of training examples
assert labels_np.shape == (1200,)
You should now have the variables images_numpy and labels_np set up as numpy arrays in the same style as the MNIST example.
I'm having some difficulty getting a series of images into the correct format to feed into sklearn.svm.SVC.
This is my first image recognition project, so I'm struggling a bit.
I've got a loop which brings in a bunch of base64 RGB images (of different sizes) into a dataframe:
import io
import base64
import matplotlib.image as mpimg
# value holds one base64-encoded image string from the loop
imageData = mpimg.imread(io.BytesIO(base64.b64decode(value)), format='JPG')
then I convert the RGB image into grayscale and flatten it:
data_images = rgb2gray(imageData).ravel()
where rgb2gray is:
def rgb2gray(rgb):
    r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
    return gray
If I look at the size differences:
df_raw.sample(10)
We can see that the picture pixel lengths are not the same between my samples. I'm a little confused here about how to proceed. For lack of a better idea, I decided to add padding based on the picture with the largest size,
df_raw.picLen.max()
Then appending a number of zeros to the end of each 1D picture array.
def padPic(x, numb, maxN):
    N = maxN - len(x)
    out = np.pad(x, (numb, N), 'constant')
    return out
calling
df_raw['picNew'] = df_raw.apply(lambda row: padPic(row['pic'],0,df_raw.picLen.max()), axis=1)
df_raw['picNewLen'] = df_raw.apply(lambda row: len(row['picNew']), axis=1)
I now have arrays that are all the same size.
From here I attempt to fit a support vector model, using the picture data as X and a set of labels as y.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_raw.picNew, df_raw.name, test_size=0.2, random_state=42)
Check the sizes:
print('Training data and target sizes: \n{}, {}'.format(X_train.shape,y_train.shape))
print('Test data and target sizes: \n{}, {}'.format(X_test.shape,y_test.shape))
Training data and target sizes: (198,), (198,)
Test data and target sizes: (50,), (50,)
After I've convinced myself everything is ready, I try to fit the model:
svm = SVC()
svm.fit(X_train, y_train)
This throws an error, and I can't figure out why:
/opt/wakari/anaconda/envs/ulabenv_2018-11-13_10.15.00/lib/python3.5/site-packages/numpy/core/numeric.py in asarray(a, dtype, order)
499
500 """
--> 501 return array(a, dtype, copy=False, order=order)
502
503
ValueError: setting an array element with a sequence.
I think this must have to do with the array size, but I can't figure it out. :-/
Beyond the error, I also have a more general question about my approach. In particular, I think my "padding" is probably incorrect and maybe some kind of resize would be better.
I appreciate any feedback on my methodology. Thanks.
I am pretty sure this is due to using lists in a feature column and strings as target values. For the latter, you need to use the LabelEncoder class to turn them into normalized class labels, as required by fit().
See description here:
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
This needs to be done before the train/test split, to make sure all names have been 'seen' by the LabelEncoder.
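A minimal sketch of that encoding step, assuming df_raw.name holds the string class labels as in the question:
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
# fit on the full set of names before splitting, so every class is seen
y_encoded = le.fit_transform(df_raw.name)
# le.inverse_transform(y_encoded) recovers the original strings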
For the former, you might want to search for MNIST tutorials, which will provide a plethora of algorithms applied to image classification problems.
Also, resizing before flattening should work better than padding.
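For instance, a hedged sketch using scikit-image's resize (cv2.resize or PIL would work just as well); the target size here is illustrative:
from skimage.transform import resize
TARGET = (64, 64)  # pick a size that suits your data
# bring every grayscale image to a common size, then flatten
data_images = resize(rgb2gray(imageData), TARGET, anti_aliasing=True).ravel()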
I've figured out the problem.
Thank you to Artem for catching my obvious mistake of not encoding the classes, but that was not my issue in the end.
It turns out the way my picture array was represented was incorrect.
The original array's shape, df_raw['picNew'].shape, evaluates to
(248,)
What I needed was a 2D representation:
np.stack(df_raw['picNew'] , axis=1).shape
(830435, 248)
All good now.
I'm still unsure about the most "correct" way to get the images all the same length. Appending zeros to the arrays seems a bit unsophisticated... so if anyone has an idea :)
I have been trying to create an image classifier in Python OpenCV 3.2.0 using keypoints and the bag-of-words technique. After some reading I found that I could perform this as follows:
1. Extract image descriptors using AKAZE
2. Perform k-means clustering on the descriptors to generate the dictionary
3. Generate histograms of images based on the dictionary
4. Train an SVM using the histograms
I managed to do steps 1 and 2 but have gotten stuck on steps 3 and 4.
I generated the histograms by using the labels returned by k-means clustering successfully (I think). However, when I wanted to use new test data that was not used to generate the dictionary, I had some unexpected results. I tried to use a FLANN matcher like in this tutorial, but the results I get from generating the histograms from the label data do not match the data returned from the FLANN matching.
I load up the images:
dictionary_size = 512
# Loading images
imgs_data = []
# imreads returns a list of all images in that directory
imgs = imreads(imgs_path)
for i in xrange(len(imgs)):
    # create a numpy array to hold the histogram for each image
    imgs_data.insert(i, np.zeros((dictionary_size, 1)))
I then create an array of descriptors (desc):
def get_descriptors(img, detector):
    # returns descriptors of an image
    return detector.detectAndCompute(img, None)[1]
# Extracting descriptors
detector = cv2.AKAZE_create()
desc = np.array([])
# desc_src_img is a list which says which image a descriptor belongs to
desc_src_img = []
for i in xrange(len(imgs)):
    img = imgs[i]
    descriptors = get_descriptors(img, detector)
    if len(desc) == 0:
        desc = np.array(descriptors)
    else:
        desc = np.vstack((desc, descriptors))
    # Keep track of which image a descriptor belongs to
    for j in range(len(descriptors)):
        desc_src_img.append(i)
# important, cv2.kmeans only accepts float32 descriptors
desc = np.float32(desc)
The descriptors are then clustered using k-means:
# Clustering
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 0.01)
flags = cv2.KMEANS_PP_CENTERS
# desc is a float32 numpy array of vstacked descriptors
compactness, labels, dictionary = cv2.kmeans(desc, dictionary_size, None, criteria, 1, flags)
Then I create histograms for each image using the labels returned from k-means:
# Getting histograms from labels
size = labels.shape[0] * labels.shape[1]
for i in xrange(size):
    label = labels[i]
    # Get this descriptor's image id
    img_id = desc_src_img[i]
    # imgs_data is a list of the same size as the number of images
    data = imgs_data[img_id]
    # data is a numpy array of size (dictionary_size, 1) filled with zeros
    data[label] += 1
ax = plt.subplot(311)
ax.set_title("Histogram from labels")
ax.set_xlabel("Visual words")
ax.set_ylabel("Frequency")
ax.plot(imgs_data[0].ravel())
This outputs a histogram which is very evenly distributed, as I expect.
I then attempt to do the same thing on the same image but using FLANN:
matcher = cv2.FlannBasedMatcher_create()
matcher.add(dictionary)
matcher.train()
descriptors = get_descriptors(imgs[0], detector)
result = np.zeros((dictionary_size, 1), np.float32)
# the FLANN matcher needs descriptors to be float32
matches = matcher.match(np.float32(descriptors))
for match in matches:
    visual_word = match.trainIdx
    result[visual_word] += 1
ax = plt.subplot(313)
ax.set_title("Histogram from FLANN")
ax.set_xlabel("Visual words")
ax.set_ylabel("Frequency")
ax.plot(result.ravel())
This outputs a histogram which is very unevenly distributed and does not match up with the first histogram.
You can view the full code and images on GitHub. Change "imgs_path" (line 20) to a directory with images before running it.
Where am I going wrong? Why are the histograms so different? How do I generate the histograms for new data using the dictionary?
As a side note, I tried using the OpenCV BOW implementation, but found another issue where it gave the error "_queryDescriptors.type() == trainDescType in function cv::BFMatcher::knnMatchImpl", and that's why I am trying to implement it myself. If someone could provide a working example using Python OpenCV BOW and AKAZE, that would be just as good.
It seems that you cannot train a FlannBasedMatcher using a dictionary beforehand, as shown below:
matcher = cv2.FlannBasedMatcher_create()
matcher.add(dictionary)
matcher.train()
However, you can pass the dictionary in when matching, like this:
matcher = cv2.FlannBasedMatcher_create()
...
matches = matcher.match(np.float32(descriptors), dictionary)
I am not entirely sure why this is. Perhaps it's that the train method is only meant to be used by the match method, as hinted in this post.
Also, according to the OpenCV docs, the parameters for match are:
queryDescriptors – Query set of descriptors.
trainDescriptors – Train set of descriptors. This set is not added to the train descriptors collection stored in the class object.
matches – Matches. If a query descriptor is masked out in mask, no match is added for this descriptor. So, matches size may be smaller than the query descriptors count.
So I guess you are just supposed to pass the dictionary in as trainDescriptors, because that is what it is.
If anyone could shed more light on this it would be appreciated.
Here are the results after using the above method:
You can see the full updated code here.
I want to evaluate whether an event is happening on my screen; every time it happens, a particular box/image shows up in a screen region with a very similar structure.
I have collected a bunch of 84x94 .png RGB images from that screen region and I'd like to build a classifier to tell me if the event is happening or not.
Therefore my idea was to create a pd.DataFrame (df) containing 2 columns: df['np_array'] contains every picture as an np.array, and df['is_category'] contains boolean values telling whether that image indicates that the event is happening or not.
The structure looks like this (with the real images being of a different size):
I have resized the images to 10x10 for training and converted them to greyscale:
import random
import numpy as np
import pandas as pd
df = pd.DataFrame(
    {'np_array': [np.random.random((10, 10)) for x in range(0, 10)],
     'is_category': [bool(random.getrandbits(1)) for x in range(0, 10)]})
My problem is that I can't fit a scikit-learn classifier by doing clf.fit(df['np_array'], df['is_category']).
I've never tried image recognition before, thanks upfront for any help!
If it's a 10x10 grayscale image, you can flatten it:
import numpy as np
from sklearn import ensemble
# generate 100 random 10x10 images
image_data = np.random.rand(100, 10, 10)
# generate random binary labels
labels = np.random.randint(0, 2, 100)
# flatten each image into a 1D feature vector
X = image_data.reshape(100, -1)
# then use any scikit-learn classification model
clf = ensemble.RandomForestClassifier()
clf.fit(X, labels)
By the way, for images the best performing algorithms are convolutional neural networks.
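As a hedged illustration of that point, here is a minimal Keras sketch for this binary task; the layer sizes are arbitrary, and X is assumed to be shaped (n_samples, 10, 10, 1) with 0/1 labels y:
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(10, 10, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation='sigmoid'),  # binary: event or not
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X, y, epochs=10, validation_split=0.2)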
I am experimenting with using OpenCV via the Python 2.7 interface to implement a machine learning-based OCR application to parse text out of an image file. I am using this tutorial (I've reposted the code below for convenience). I am completely new to machine learning, and relatively new to OpenCV.
OCR of Hand-written Digits:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('digits.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Now we split the image to 5000 cells, each 20x20 size
cells = [np.hsplit(row,100) for row in np.vsplit(gray,50)]
# Make it into a Numpy array. Its size will be (50,100,20,20)
x = np.array(cells)
# Now we prepare train_data and test_data.
train = x[:,:50].reshape(-1,400).astype(np.float32) # Size = (2500,400)
test = x[:,50:100].reshape(-1,400).astype(np.float32) # Size = (2500,400)
# Create labels for train and test data
k = np.arange(10)
train_labels = np.repeat(k,250)[:,np.newaxis]
test_labels = train_labels.copy()
# Initiate kNN, train the data, then test it with test data for k=1
knn = cv2.KNearest()
knn.train(train,train_labels)
ret,result,neighbours,dist = knn.find_nearest(test,k=5)
# Now we check the accuracy of classification
# For that, compare the result with test_labels and check which are wrong
matches = result==test_labels
correct = np.count_nonzero(matches)
accuracy = correct*100.0/result.size
print accuracy
# save the data
np.savez('knn_data.npz',train=train, train_labels=train_labels)
# Now load the data
with np.load('knn_data.npz') as data:
    print data.files
    train = data['train']
    train_labels = data['train_labels']
OCR of English Alphabets:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the data, converters convert the letter to a number
data = np.loadtxt('letter-recognition.data', dtype='float32', delimiter=',',
                  converters={0: lambda ch: ord(ch) - ord('A')})
# split the data to two, 10000 each for train and test
train, test = np.vsplit(data,2)
# split trainData and testData to features and responses
responses, trainData = np.hsplit(train,[1])
labels, testData = np.hsplit(test,[1])
# Initiate the kNN, classify, measure accuracy.
knn = cv2.KNearest()
knn.train(trainData, responses)
ret, result, neighbours, dist = knn.find_nearest(testData, k=5)
correct = np.count_nonzero(result == labels)
accuracy = correct*100.0/10000
print accuracy
The 2nd code snippet (for the English alphabet) takes input from a .data file in the following format:
T,2,8,3,5,1,8,13,0,6,6,10,8,0,8,0,8
I,5,12,3,7,2,10,5,5,4,13,3,9,2,8,4,10
D,4,11,6,8,6,10,6,2,6,10,3,7,3,7,3,9
N,7,11,6,6,3,5,9,4,6,4,4,10,6,10,2,8
G,2,1,3,1,1,8,6,6,6,6,5,9,1,7,5,10
S,4,11,5,8,3,8,8,6,9,5,6,6,0,8,9,7
B,4,2,5,4,4,8,7,6,6,7,6,6,2,8,7,10
...there are about 20,000 lines like that. The data describes the contours of characters.
I have a basic grasp on how this works, but I am confused as to how I can use this to actually perform OCR on an image. How can I use this code to write a function that takes a cv2 image as a parameter and returns a string representing the recognized text?
In general, machine learning works like this: first you train your program to understand the domain of your problem, then you start asking it questions.
So if you are creating an OCR system, the first step is teaching your program what the letter A looks like, then B, and so on.
You use OpenCV to clean noise from the image, identify groups of pixels that could be letters, and isolate them.
Then you feed those letters to your OCR program. In training mode, you feed in an image and tell the program what letter it represents. In asking mode, you feed in an image and ask which letter it is. The better the training, the more accurate your answers will be (the program could still get a letter wrong; there is always a chance of that).
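As a rough, hedged sketch of the asking mode, assuming the OpenCV 2.x / Python 2 API from the question and the digits kNN model trained above (the segmentation here is deliberately naive, and recognize_digits is a hypothetical helper):
import cv2
import numpy as np
def recognize_digits(img, knn):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # isolate dark ink on a light background
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # sort bounding boxes left-to-right so the output reads in order
    boxes = sorted([cv2.boundingRect(c) for c in contours], key=lambda b: b[0])
    out = []
    for x, y, w, h in boxes:
        # resize each candidate glyph to the 20x20 training size
        cell = cv2.resize(gray[y:y + h, x:x + w], (20, 20))
        sample = cell.reshape(1, 400).astype(np.float32)
        ret, result, neighbours, dist = knn.find_nearest(sample, k=5)
        out.append(str(int(result[0][0])))
    return ''.join(out)
The better the segmentation (deskewing, filtering tiny contours, handling touching characters), the better this will do in practice.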