How to inspect an image using Python - python

I want to match 2 images and detect their similarity.
I am trying a color-filter approach; can anyone help me with which method I should follow?
I want to detect the color pattern in the image.
import cv2
import numpy as np

# read both images and convert them to HSV
img = cv2.imread("img.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
img1 = cv2.imread("img1.jpg")
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)

# keep only the pixels whose HSV values fall inside this range
lower_red = np.array([60, 60, 60])
upper_red = np.array([250, 250, 250])
mask = cv2.inRange(hsv, lower_red, upper_red)

# combine the two images wherever the mask is non-zero
res = cv2.bitwise_and(img, img1, mask=mask)

#cv2.imshow('frame', img)
#cv2.imshow('mask', mask)
cv2.imshow('img', res)
cv2.waitKey(0)
Can anyone suggest which method I can use?

Refer to this Link for a PIL-based approach:
from PIL import Image, ImageChops

im1 = Image.open("splash.png")
im2 = Image.open("splash2.png")

# pixel-wise absolute difference of the two images
diff = ImageChops.difference(im2, im1)
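To turn that difference image into a single similarity score, here is a minimal sketch that continues from the snippet above; the root-mean-square measure is my own addition, not part of the linked answer:
import numpy as np

# root-mean-square of the difference: 0.0 means identical images,
# larger values mean the images differ more
diff_arr = np.asarray(diff, dtype=np.float64)
rms = np.sqrt((diff_arr ** 2).mean())
print("RMS difference:", rms)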
Also refer to these existing questions:
Checking images for similarity with OpenCV
Simple and fast method to compare images for similarity
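If you go the OpenCV route instead, one common approach from those threads is to compare color histograms; a minimal sketch (the file names and bin counts below are placeholders of mine):
import cv2

img = cv2.imread("img.jpg")
img1 = cv2.imread("img1.jpg")

# compare HSV color histograms of the two images
hsv_a = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hsv_b = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)

# 2D histogram over hue and saturation
hist_a = cv2.calcHist([hsv_a], [0, 1], None, [50, 60], [0, 180, 0, 256])
hist_b = cv2.calcHist([hsv_b], [0, 1], None, [50, 60], [0, 180, 0, 256])
cv2.normalize(hist_a, hist_a, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
cv2.normalize(hist_b, hist_b, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)

# correlation: 1.0 means identical color distributions
score = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)
print("Histogram correlation:", score)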

Related

How to Enhance Image contrast and brightness

I have an input image and I want to improve its contrast and brightness.
Input image:
After running my code, the result is:
But the image I finally want looks like the Clarity filter in the Windows 10 Photos app's edit mode (you can use this link for more details: https://www.digitalunite.com/technology-guides/digital-photography/editing-photos-and-videos-windows-10-using-photos-app):
(more detail and better separation of objects and colors)
My code is:
import cv2
import numpy as np
import skimage.filters as filters
from PIL import ImageFilter,ImageEnhance
# read the image
img = cv2.imread('input.png')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur
smooth = cv2.GaussianBlur(gray, (95,95),0)
# divide gray by the blurred image to even out the illumination
division = cv2.divide(gray, smooth, scale=40)
# sharpen using unsharp masking
result = filters.unsharp_mask(division, radius=2.5, amount=1, multichannel=False, preserve_range=False)
result = (220*result).clip(0,220).astype(np.uint8)
cv2.imshow('smooth', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
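For a Clarity-like boost in local contrast, CLAHE on the lightness channel is another option worth trying; this is a sketch of my own, not taken from the original post:
import cv2

img = cv2.imread('input.png')

# work on the L channel in LAB space so the colors are preserved
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

# CLAHE boosts local contrast; tune clipLimit and tileGridSize to taste
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
cv2.imshow('clahe', enhanced)
cv2.waitKey(0)
cv2.destroyAllWindows()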

Applying original image pixels wherever the thresholded image is black in OpenCV/numpy?

I've applied some morphological operations on a thresholded image and now I want to convert it back to the original image, but only wherever the image is black. Here's some example pseudocode of what I'm trying to do:
import cv2
import numpy as np

img = cv2.imread("img01.jpg")
empty_image = np.zeros(img.shape, dtype=np.uint8)

grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(grey, 125, 255, cv2.THRESH_BINARY)

# clean up the threshold with an erosion followed by a dilation
kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(thresh1, kernel, iterations=2)
mask = cv2.dilate(mask, kernel, iterations=2)

# copy the original pixels wherever the mask is non-zero
empty_image[mask > 0] = img[mask > 0]
In other terms: How can I restore parts of an original image to parts of a thresholded image?
After finding the mask, just use result = cv2.bitwise_and(img, img, mask=mask); there is no need to declare an empty image. This is the usual masking approach.
Another way is boolean indexing: img[mask == 0] = 0 sets every image pixel to zero (black) wherever the mask is black.
And that is the result:
This link points to a useful example in the OpenCV docs for understanding bitwise_and and other simple related operations.
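Putting both suggestions together with the mask from the question's own code, a minimal sketch:
import cv2
import numpy as np

img = cv2.imread("img01.jpg")
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(grey, 125, 255, cv2.THRESH_BINARY)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(thresh1, kernel, iterations=2), kernel, iterations=2)

# option 1: bitwise_and keeps the original pixels where the mask is white
result = cv2.bitwise_and(img, img, mask=mask)

# option 2: boolean indexing blacks out the pixels where the mask is black
result2 = img.copy()
result2[mask == 0] = 0

cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()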

How can I replace `plt.imsave` with `cmap` option set to `gray` with opencv operations?

This is the source image I am working with:
I am using this GitHub repository (the file I'm using is tools/test_lanenet.py) to do binary lane segmentation. Now I get this image:
The second image actually results from this command:
# this line results in an array with shape (512, 256); it is just a hypothetical line of code.
# what I really care about is the line that saves the image with the matplotlib library
binary_seg_image = lane_segmenter.binary_segment()
# this line saves the image
plt.imsave('binary_image_plt.png', binary_seg_image[0] * 255, cmap='gray')
First, I need to do the same operation with the OpenCV module, preferably faster.
Next, I need to map the lanes segmented in the second image onto the road lanes in the source image. I think I have to use the second image as a mask and use cv2.bitwise_and to do the job, right? Can anybody help me?
Thank you guys.
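For the first part (replacing plt.imsave with an OpenCV call), a minimal sketch, assuming binary_seg_image[0] holds values in the range [0, 1]:
import cv2
import numpy as np

# scale to 0-255 and convert to uint8, then write with OpenCV;
# a single-channel uint8 image is saved as grayscale, matching cmap='gray'
out = (binary_seg_image[0] * 255).astype(np.uint8)
cv2.imwrite('binary_image_cv2.png', out)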
If you want to color the image where the mask exists, then this is one way using Python/OpenCV. In place of bitwise_and, you simply have to do numpy coloring where the mask is white. Note again, your images are not the same size and I do not know how best to align them. I leave that to you. I am using your two input images as in my other answer. The code is nearly the same.
import cv2
import numpy as np
# read image
img = cv2.imread('road.png')
ht, wd, cc = img.shape
print(img.shape)
# read mask as grayscale
gray = cv2.imread('road_mask.png', cv2.IMREAD_GRAYSCALE)
hh, ww = gray.shape
print(gray.shape)
# get minimum dimensions
hm = min(ht, hh)
wm = min(wd, ww)
print(hm, wm)
# crop img and gray to min dimensions
img = img[0:hm, 0:wm]
gray = gray[0:hm, 0:wm]
# threshold gray as mask
thresh = cv2.threshold(gray,128,255,cv2.THRESH_BINARY)[1]
print(thresh.shape)
# apply mask to color image
result = img.copy()
result[thresh==255] = (0,0,255)
cv2.imshow('image', img)
cv2.imshow('gray', gray)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('road_colored_by_mask.png', result)
Your images are not the same size. To mask the black/white image onto the color image, they need to align. I tried to simply crop them to the same minimum dimensions at the top left corner, but that did not align them properly.
However, this Python/OpenCV code will give you some idea how to start once you figure out how to align them.
Color Input:
B/W Lane Image:
import cv2
import numpy as np
# read image
img = cv2.imread('road.png')
ht, wd, cc = img.shape
print(img.shape)
# read mask as grayscale
gray = cv2.imread('road_mask.png', cv2.IMREAD_GRAYSCALE)
hh, ww = gray.shape
print(gray.shape)
# get minimum dimensions
hm = min(ht, hh)
wm = min(wd, ww)
print(hm, wm)
# crop img and gray to min dimensions
img = img[0:hm, 0:wm]
gray = gray[0:hm, 0:wm]
# threshold gray as mask
thresh = cv2.threshold(gray,128,255,cv2.THRESH_BINARY)[1]
print(thresh.shape)
# make thresh 3 channels as mask
mask = cv2.merge((thresh, thresh, thresh))
# apply mask to color image
result = cv2.bitwise_and(img, mask)
cv2.imshow('image', img)
cv2.imshow('gray', gray)
cv2.imshow('thresh', thresh)
cv2.imshow('mask', mask)
cv2.imshow('masked image', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('road_masked_on_black.png', result)
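If simple cropping does not align the two inputs, resizing the mask to the color image's dimensions is another option; a sketch of mine, assuming both images cover the same field of view (which, as noted above, has not been verified):
import cv2

img = cv2.imread('road.png')
gray = cv2.imread('road_mask.png', cv2.IMREAD_GRAYSCALE)

# resize the mask to the color image's width and height
ht, wd = img.shape[:2]
gray = cv2.resize(gray, (wd, ht), interpolation=cv2.INTER_NEAREST)

thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
result = img.copy()
result[thresh == 255] = (0, 0, 255)
cv2.imwrite('road_colored_by_resized_mask.png', result)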

Detect a scratch on noise image with OpenCV

I am trying to detect a small scratch in the noisy image shown below. It is quite noticeable by eye, but I would like to identify it using OpenCV in Python.
I tried blurring the image, subtracting the blur from the original, and then thresholding the result to get the second image.
Could anybody please advise how to extract this scratch?
Original image:
Image after blurring, subtraction, and threshold:
This is how I process this image:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread("scratch0.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# estimate the background with a large box blur, then subtract
blur = cv2.blur(gray, (71, 71))
diff = cv2.subtract(blur, gray)

# inverse threshold of the difference image
ret, th = cv2.threshold(diff, 13, 255, cv2.THRESH_BINARY_INV)

cv2.imshow("threshold", th)
cv2.waitKey(0)
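One way to go further from that thresholded image is to keep only long, thin connected components, since a scratch tends to be elongated; this is a sketch of my own with heuristic thresholds, not a verified solution for this exact image:
import cv2
import numpy as np

img = cv2.imread("scratch0.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# same preprocessing as above, but thresholded so the scratch ends up white
blur = cv2.blur(gray, (71, 71))
diff = cv2.subtract(blur, gray)
ret, th = cv2.threshold(diff, 13, 255, cv2.THRESH_BINARY)

# close small gaps so the scratch forms a single component
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel)

# the [-2:] slice works for both OpenCV 3.x (3 return values) and 4.x (2 return values)
contours, hierarchy = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]

# keep only elongated components and draw them on the original image
result = img.copy()
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if max(w, h) > 4 * min(w, h) and max(w, h) > 30:  # heuristic aspect-ratio and size checks
        cv2.drawContours(result, [c], -1, (0, 0, 255), 2)

cv2.imshow("scratch candidates", result)
cv2.waitKey(0)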

Extracting components from an edge image and storing for further processing

Input:
Given an edge image, I want to retrieve its components one by one and store each component as an image so that I can use it later for processing. I guess this is called connected component labeling.
For example, in the input image there are 2 lines, 1 circle, and 2 curves.
I want 5 image files containing these 5 components.
I was able to come up with the code below, but I do not know how to proceed further. Currently I am getting all the components coloured differently in the output.
import scipy
from skimage import io
from scipy import ndimage
import matplotlib.pyplot as plt
import cv2
import numpy as np
fname='..//Desktop//test1.png'
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#canny
img_canny = cv2.Canny(img,100,200)
threshold = 50
# find connected components
labeled, nr_objects = ndimage.label(img_canny)
print('Number of objects is %d'% nr_objects)
plt.imsave('..//Desktop//out.png', labeled)
Output
New output
You may not need cv2.Canny() to segment the contours; you can simply use a binary thresholding technique, as below:
img = cv2.imread("/path/to/img.png")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 130, 255, cv2.THRESH_BINARY_INV)

# OpenCV v3.x returns (image, contours, hierarchy); in v4.x drop the first value
im, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# crop each contour's bounding box out of the original image and save it
for i in range(len(contours)):
    rect = cv2.boundingRect(contours[i])
    contour_component = img[rect[1]:rect[1] + rect[3], rect[0]:rect[0] + rect[2]]
    cv2.imwrite("component_{}.png".format(i), contour_component)
This:
num = nr_objects
i = 0
while i < num:
    plt.imshow(labeled)
    i = i + 1
Does not loop over the different labels; it just shows the same image num times. You need to do something like:
for i in range(num):
    tmp = np.zeros(labeled.shape)
    tmp[labeled == i] = 255
    plt.imshow(tmp)
Then you will see one image for each label. Also, you can use a for loop as shown instead of the while loop. If you have any questions, leave a comment.
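To actually store each component as a separate image file, as the question asks, a minimal sketch building on labeled and nr_objects from the question's code (the output path is a placeholder):
import numpy as np
import matplotlib.pyplot as plt

# ndimage.label assigns labels 1..nr_objects; label 0 is the background
for i in range(1, nr_objects + 1):
    component = np.zeros(labeled.shape, dtype=np.uint8)
    component[labeled == i] = 255
    plt.imsave('..//Desktop//component_{}.png'.format(i), component, cmap='gray')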
