Correlating two skeletonized images : Python

import cv2
import numpy as np

# Load both images as grayscale
img = cv2.imread('thin.jpg',0)
img1 = cv2.imread('thin1.jpg',0)
cv2.imshow('image1',img)
cv2.imshow('image2',img1)

# Binarize with Otsu thresholding
ret,img = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret,img1 = cv2.threshold(img1,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)

skel = np.zeros(img.shape,np.uint8)
skel1 = np.zeros(img1.shape,np.uint8)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))

# Invert so the foreground is white, then thicken it before thinning
img = 255 - img
img1 = 255 - img1
img = cv2.dilate(img,element,iterations=8)
img1 = cv2.dilate(img1,element,iterations=8)

# Morphological skeletonization: repeatedly erode and collect what is removed
done = False
while not done:
    eroded = cv2.erode(img,element)
    eroded1 = cv2.erode(img1,element)
    temp = cv2.dilate(eroded,element)
    temp1 = cv2.dilate(eroded1,element)
    temp = cv2.subtract(img,temp)
    temp1 = cv2.subtract(img1,temp1)
    skel = cv2.bitwise_or(skel,temp)
    skel1 = cv2.bitwise_or(skel1,temp1)
    img = eroded.copy()
    img1 = eroded1.copy()
    # Stop once both images have been eroded away completely
    if cv2.countNonZero(img) == 0 and cv2.countNonZero(img1) == 0:
        done = True

cv2.imshow('IMAGE',skel)
cv2.imshow('TEMPLATE',skel1)
cv2.imwrite("image.jpg",skel)
if cv2.waitKey(0) & 0xFF == ord('q'):
    cv2.destroyAllWindows()
This is the code I used to convert two grayscale images into two skeletonized images using binarization and thinning, and the result was obtained. Now, with these two skeletonized images, I want to compare them to see whether they match. How can I correlate them with each other? Do I need to convert the skeletons into 2D arrays? Can anyone suggest a solution? Thanks in advance.

There are a number of ways you can compare the images to see if they match. The simplest is to do a pixelwise subtraction to create a new image and then sum the pixels in the new image. If they sum to zero you have an exact match. The larger the sum the worse the match.
You will however have a problem using most comparison techniques on a skeletonized image. You take the image and reduce it to skinny little lines that are unlikely to overlap for images that only deviate from each other by a little bit.
With skeletonized images you often need to compare features. For example, identify the points of intersection of the skeleton, and use the location of those points for comparing images. In your sample image you might be able to extract the lines (I see three major ones) and then compare images based on the location of the lines.
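For the simple pixelwise approach, a minimal sketch could look like this (the filenames are placeholders; use the two skeleton images produced above, which must be the same size):
import cv2
import numpy as np

skel = cv2.imread('image.jpg', 0)
skel1 = cv2.imread('template.jpg', 0)

diff = cv2.absdiff(skel, skel1)          # pixelwise absolute difference
mismatch = int(np.count_nonzero(diff))   # 0 means an exact match; larger is worse
print('differing pixels:', mismatch)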

Binary images are already represented as 2D numpy arrays.
This is a complex problem. You can do this by reshaping the images to two vectors (assuming they are exactly the same size), and then calculating the correlation coefficient:
np.corrcoef(img.reshape(-1), img1.reshape(-1))
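Note that np.corrcoef returns a 2x2 matrix; the off-diagonal entry is the correlation between the two images. A small self-contained sketch (filenames taken from the question, images assumed to be the same size):
import cv2
import numpy as np

img = cv2.imread('thin.jpg', 0)
img1 = cv2.imread('thin1.jpg', 0)

r = np.corrcoef(img.reshape(-1), img1.reshape(-1))[0, 1]
print(r)  # 1.0 for identical images, lower for less similar ones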

One possible solution would be to correlate (or subtract) the blurred version of each skeletonized image with one another.
That way, the unavoidable small offsets between skeleton lines wouldn't have such a negative impact on the outcome as subtracting the skeletons directly would (since the skeleton lines would most probably not overlap exactly).
I'm assuming here that the original images weren't similar to each other in the first place, otherwise you wouldn't need to skeletonize them, right?
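A minimal sketch of that idea, assuming the two skeletons have already been saved to disk (the filenames and the blur kernel size here are just placeholders):
import cv2
import numpy as np

skel = cv2.imread('image.jpg', 0)       # first skeleton
skel1 = cv2.imread('template.jpg', 0)   # second skeleton, same size

# Blur both skeletons so slightly offset lines still overlap
blur = cv2.GaussianBlur(skel.astype(np.float32), (15, 15), 0)
blur1 = cv2.GaussianBlur(skel1.astype(np.float32), (15, 15), 0)

# Correlation coefficient of the blurred skeletons (closer to 1.0 = better match)
r = np.corrcoef(blur.reshape(-1), blur1.reshape(-1))[0, 1]
print('similarity:', r)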

Related

Add diff of images into one image (Linux/Python)

I'm looking for a way to blend only the differences of images into one image. I'm looking for a linux command or a way to achieve this with python.
Example:
Source images:
The result should be:
Another use case:
http://3.bp.blogspot.com/-h3yuVc0hyvc/ToqQDE0Bf4I/AAAAAAAAGj0/HON-gM_9PhU/s1600/JayBumpOllieStichedFinishedRS.jpg
Thanks!!
Vince
It would make sense to start from an image that contains only the background and compare each frame with it. The background can be computed as the median over the whole sequence. If we assume the background median image is a0.jpg and the following three frames with the dots are a1.jpg, a2.jpg and a3.jpg, then they can be merged using the compare_images function of scikit-image, modifying values only at those pixels where a change was detected. Note that due to compression there is a tolerance threshold (th) set to 0.1; you can adjust that value within (0, 1) for more or less sensitivity.
The following script should do something like that:
import skimage.io as io
from skimage.util import compare_images
import numpy as np
im0 = io.imread('a0.jpg') # median of source images
im1 = io.imread('a1.jpg') # source image 1
im2 = io.imread('a2.jpg') # source image 2
im3 = io.imread('a3.jpg') # source image 3
im_all = np.copy(im0)
th = 0.1
# d = np.max(np.abs(im2 - im0), -1)
d = compare_images(im1, im0, method='diff')
d = np.max(np.abs(d), -1)
im_all[d > th] = im1[d > th]
io.imsave("d1.jpg", d > th)

d = compare_images(im2, im0, method='diff')
d = np.max(np.abs(d), -1)
im_all[d > th] = im2[d > th]
io.imsave("d2.jpg", d > th)

d = compare_images(im3, im0, method='diff')
d = np.max(np.abs(d), -1)
im_all[d > th] = im3[d > th]
io.imsave("d3.jpg", d > th)
io.imsave("im_all.jpg", im_all)
This is not exactly what I asked for, but it does the job well enough for my needs:
convert 1.jpg 2.jpg 3.jpg -evaluate-sequence max evalresult.png
With the example image with the clouds it doesn't work very well (because the clouds are white), but in other contexts it is great (when the differences are brighter than the background).

Detect and count number of different pixels between two images with OpenCV Python

Similar to the question here but I'd like to return a count of the total number of different pixels between the two images.
I'm sure it is doable with OpenCV in Python but I'm not sure where to start.
Assuming the two images are the same size:
import numpy as np
import cv2
im1 = cv2.imread("im1.jpg")
im2 = cv2.imread("im2.jpg")
# total number of different pixels between im1 and im2
np.sum(im1 != im2)
You can use OpenCV's absdiff to get the difference between the images, then count the non-zero entries to get the number of differing pixels. Note that cv2.countNonZero only accepts single-channel images, so for color images collapse the difference to one channel first.
img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')
difference = cv2.absdiff(img1, img2)
# countNonZero expects a single-channel image, so take the per-pixel
# maximum over the color channels before counting
num_diff = cv2.countNonZero(difference.max(axis=2))
Since cv2 images are just numpy arrays of shape (height, width, num_color_dimensions) for color images, and (height, width) for black and white images, this is easy to do with ordinary numpy operations. For black/white images, we sum the number of differing pixels:
(img1 != img2).sum()
(Note that True=1 and False=0, so we can sum the array to get the number of True elements.)
For color images, we want to find all pixels where any of the color components differ, so we first check whether any component differs along the channel axis (axis=2, since the shape dimensions are zero-indexed):
(img1 != img2).any(axis=2).sum()

Proper image thresholding to prepare it for OCR in python using opencv

I am really new to opencv and a beginner to python.
I have this image:
I want to somehow apply proper thresholding to keep nothing but the 6 digits.
The bigger picture is that I intend to try to perform manual OCR to the image for each digit separately, using the k-nearest neighbours algorithm on a per digit level (kNearest.findNearest)
The problem is that I cannot clean up the digits sufficiently, especially the '7' digit which has this blue-ish watermark passing through it.
The steps I have tried so far are the following:
I am reading the image from disk
# IMREAD_UNCHANGED is -1
image = cv2.imread(sys.argv[1], cv2.IMREAD_UNCHANGED)
Then I'm keeping only the blue channel to get rid of the blue watermark around digit '7', effectively converting it to a single channel image
image = image[:,:,0]
# opened with -1, which means "as is",
# so the blue channel is the first in BGR
Then I'm multiplying it a bit to increase contrast between the digits and the background:
image = cv2.multiply(image, 1.5)
Finally I perform Binary+Otsu thresholding:
_,thressed1 = cv2.threshold(image,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
As you can see, the end result is pretty good, except for the digit '7', which has kept a lot of noise.
How can I improve the end result? Please supply an example result image where possible; it is easier to understand than code snippets alone.
You can try median-blurring the gray (blue-channel) image with different kernel sizes (such as 3 and 51), dividing the blurred results, and thresholding the quotient. Something like this:
#!/usr/bin/python3
# 2018/09/23 17:29 (CST)
# (Happy Mid-Autumn Festival)
import cv2
import numpy as np

fname = "color.png"
bgray = cv2.imread(fname)[...,0]

blurred1 = cv2.medianBlur(bgray, 3)
blurred2 = cv2.medianBlur(bgray, 51)
divided = np.ma.divide(blurred1, blurred2).data
normed = np.uint8(255*divided/divided.max())
th, threshed = cv2.threshold(normed, 100, 255, cv2.THRESH_OTSU)
dst = np.vstack((bgray, blurred1, blurred2, normed, threshed))
cv2.imwrite("dst.png", dst)
The result:
Why not just keep values in the image that are above a certain threshold?
Like this:
import cv2
import numpy as np
img = cv2.imread("./a.png")[:,:,0] # the last readable image
new_img = []
for line in img:
new_img.append(np.array(list(map(lambda x: 0 if x < 100 else 255, line))))
new_img = np.array(list(map(lambda x: np.array(x), new_img)))
cv2.imwrite("./b.png", new_img)
Looks great:
You could probably play with the threshold even more and get better results.
It doesn't seem easy to completely remove the annoying stamp.
What you can do is flatten the background intensity by
computing a lowpass image (Gaussian filter or morphological closing), with a filter size a little larger than the character size, and then
dividing the original image by the lowpass image.
Then you can use Otsu.
As you see, the result isn't perfect.
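A rough sketch of that flattening idea, assuming the blue channel is extracted as in the question (the Gaussian kernel size of 51 is a guess; it just needs to be somewhat larger than the characters):
import cv2

blue = cv2.imread('color.png')[:, :, 0]           # blue channel, as in the question

# Lowpass estimate of the background (kernel larger than the character size)
background = cv2.GaussianBlur(blue, (51, 51), 0)

# Divide the original image by the lowpass image to flatten the illumination
flattened = cv2.divide(blue, background, scale=255)

# Otsu thresholding on the flattened image
_, binary = cv2.threshold(flattened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('flattened_otsu.png', binary)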
I tried a slightly different approach than Yves on the blue channel:
Apply a median filter (r=2)
Use edge detection (e.g. the Sobel operator)
Automatic thresholding (Otsu)
Closing of the image
This approach seems to make the output a little less noisy. However, one has to address the holes in the numbers. This can be done by detecting black contours which are completely surrounded by white pixels and simply filling them with white.
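A possible sketch of that pipeline (the filter radius, Sobel normalization and closing kernel are assumptions, not the exact values used for the images above):
import cv2
import numpy as np

blue = cv2.imread('color.png')[:, :, 0]          # blue channel again

# 1. Median filter to suppress noise
smoothed = cv2.medianBlur(blue, 5)

# 2. Edge detection with the Sobel operator (gradient magnitude)
gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
edges = cv2.magnitude(gx, gy)
edges = np.uint8(255 * edges / edges.max())

# 3. Automatic (Otsu) thresholding
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Morphological closing to join broken strokes
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('edges_otsu_closed.png', closed)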

Remove background of the image using opencv Python

I have two images, one with only the background and the other with the background + a detectable object (in my case a car). Below are the images.
I am trying to remove the background so that I only have the car in the resulting image. Following is the code with which I am trying to get the desired results:
import numpy as np
import cv2
original_image = cv2.imread('IMG1.jpg', cv2.IMREAD_COLOR)
gray_original = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
background_image = cv2.imread('IMG2.jpg', cv2.IMREAD_COLOR)
gray_background = cv2.cvtColor(background_image, cv2.COLOR_BGR2GRAY)
foreground = np.absolute(gray_original - gray_background)
foreground[foreground > 0] = 255
cv2.imshow('Original Image', foreground)
cv2.waitKey(0)
The image resulting from subtracting the two images is:
Here is the problem: the expected resulting image should contain only the car.
Also, if you look closely at the two images, you'll see that they are not exactly the same; the camera moved a little, so the background was disturbed slightly. My question is: with these two images, how can I subtract the background? I do not want to use the grabCut or backgroundSubtractorMOG algorithms right now, because I do not yet know what is going on inside those algorithms.
What I am trying to do is to get the following resulting image
Also, if possible, please guide me towards a general way of doing this, not only for this specific case; that is, I have the background in one image and the background + object in the second image. What would be the best possible way of doing this? Sorry for such a long question.
I solved your problem using OpenCV's watershed algorithm. You can find the theory and examples of watershed here.
First I selected several points (markers) to dictate where the object I want to keep is and where the background is. This step is manual and can vary a lot from image to image. It also requires some repetition until you get the desired result. I suggest using a tool to get the pixel coordinates.
Then I created an empty integer array of zeros with the size of the car image, and assigned some values (1: background, [255, 192, 128, 64]: car parts) to the pixels at the marker positions.
NOTE: When I downloaded your image I had to crop it to get the one with the car. After cropping, the image has a size of 400x601. This may not be the size of the image you have, so the markers will be off.
Afterwards I used the watershed algorithm. The 1st input is your image and 2nd input is the marker image (zero everywhere except at marker positions). The result is shown in the image below.
I set all pixels with value greater than 1 to 255 (the car), and the rest (background) to zero. Then I dilated the obtained image with a 3x3 kernel to avoid losing information on the outline of the car. Finally, I used the dilated image as a mask for the original image, using the cv2.bitwise_and() function, and the result lies in the following image:
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image
img = cv2.imread("/path/to/image.png", 3)
# Create a blank image of zeros (same dimension as img)
# It should be grayscale (1 color channel)
marker = np.zeros_like(img[:,:,0]).astype(np.int32)
# This step is manual. The goal is to find the points
# which create the result we want. I suggest using a
# tool to get the pixel coordinates.
# Dictate the background and set the markers to 1
marker[204][95] = 1
marker[240][137] = 1
marker[245][444] = 1
marker[260][427] = 1
marker[257][378] = 1
marker[217][466] = 1
# Dictate the area of interest
# I used different values for each part of the car (for visibility)
marker[235][370] = 255 # car body
marker[135][294] = 64 # rooftop
marker[190][454] = 64 # rear light
marker[167][458] = 64 # rear wing
marker[205][103] = 128 # front bumper
# rear bumper
marker[225][456] = 128
marker[224][461] = 128
marker[216][461] = 128
# front wheel
marker[225][189] = 192
marker[240][147] = 192
# rear wheel
marker[258][409] = 192
marker[257][391] = 192
marker[254][421] = 192
# Now we have set the markers, we use the watershed
# algorithm to generate a marked image
marked = cv2.watershed(img, marker)
# Plot this one. If it does what we want, proceed;
# otherwise edit your markers and repeat
plt.imshow(marked, cmap='gray')
plt.show()
# Make the background black, and what we want to keep white
marked[marked == 1] = 0
marked[marked > 1] = 255
# Use a kernel to dilate the image, to not lose any detail on the outline
# I used a kernel of 3x3 pixels
kernel = np.ones((3,3),np.uint8)
dilation = cv2.dilate(marked.astype(np.float32), kernel, iterations = 1)
# Plot again to check whether the dilation is according to our needs
# If not, repeat by using a smaller/bigger kernel, or more/less iterations
plt.imshow(dilation, cmap='gray')
plt.show()
# Now apply the mask we created on the initial image
final_img = cv2.bitwise_and(img, img, mask=dilation.astype(np.uint8))
# cv2.imread reads the image as BGR, but matplotlib uses RGB
# BGR to RGB so we can plot the image with accurate colors
b, g, r = cv2.split(final_img)
final_img = cv2.merge([r, g, b])
# Plot the final result
plt.imshow(final_img)
plt.show()
If you have a lot of images you will probably need to create a tool to annotate the markers graphically, or even an algorithm to find markers automatically.
The problem is that you're subtracting arrays of unsigned 8 bit integers. This operation can overflow.
To demonstrate
>>> import numpy as np
>>> a = np.array([[10,10]],dtype=np.uint8)
>>> b = np.array([[11,11]],dtype=np.uint8)
>>> a - b
array([[255, 255]], dtype=uint8)
Since you're using OpenCV, the simplest way to achieve your goal is to use cv2.absdiff().
>>> cv2.absdiff(a,b)
array([[1, 1]], dtype=uint8)
I recommend using OpenCV's grabcut algorithm. You first draw a few lines on the foreground and background, and keep doing this until your foreground is sufficiently separated from the background. It is covered here: https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html
as well as in this video: https://www.youtube.com/watch?v=kAwxLTDDAwU
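For reference, a minimal grabCut sketch initialized with a rectangle (the filename and rectangle coordinates are placeholders; in practice you refine the result by drawing foreground/background strokes as the tutorial shows):
import cv2
import numpy as np

img = cv2.imread('IMG1.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough rectangle around the object: (x, y, width, height) -- placeholder values
rect = (50, 50, 450, 290)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labelled as definite or probable foreground
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
result = img * fg_mask[:, :, np.newaxis]
cv2.imwrite('foreground.png', result)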

Remove features from binarized image

I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
Sure, you can just get the connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear. However, if you want to do it right, you should think about why you have the black area in the first place.
You did not use adaptive (locally adaptive) thresholding, and this made your output sensitive to shading. Try not to get the black region in the first place by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255 - img2, dst, 255, ADAPTIVE_THRESH_MEAN_C,
                  THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
In the future, you may want to read about adaptive thresholds and how to sample colors locally. I personally found it useful to sample binary colors orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which matters because there is typically more background color, and that biases the estimation. Using SWT and MSER may give you even more ideas about text segmentation.
I tried this:
import numpy as np
import cv2

im = cv2.imread('image.png')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
grayout = 255*np.ones((im.shape[0], im.shape[1], 1), np.uint8)
blur = cv2.GaussianBlur(gray, (5,5), 1)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    # Keep only small contours (likely text) and check how dense they are
    if area > 10 and area < 200:
        roi = gray[y:y+h, x:x+w]
        cntd = 0
        for i in range(x, x+w):
            for j in range(y, y+h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd/(float(h*w))
        # Copy only sparse (text-like) regions to the output image
        if density < 0.5:
            for i in range(x, x+w):
                for j in range(y, y+h):
                    grayout[j, i] = gray[j, i]
    wcnt = wcnt + 1

cv2.imwrite('result.png', grayout)
You have to balance two things: removing the black spots while not losing the contents of what is on the board. The output I got is this:
Here is a Python numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Imported mahotas & numpy with standard abbreviations
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.
