OpenCV - Smoothing borders - python

I want to stitch multiple image patches onto a new, mainly gray background image. The image patches contain colored elements which should not be changed, if possible. Their shapes and colors are diverse. Like the new background image, the borders of the image patches are also gray, just slightly different, but you can see strong borders if I simply paste the patches like this:
import cv2
ImgPatch = cv2.imread("C://...//ImagePatch.png")
NewBackground = cv2.imread("C://...//NewBackground.png")
height, width, channels = ImgPatch.shape
NewBackground[y:y+height, x:x+width] = ImgPatch  # x, y: paste position
I tried cv2.seamlessClone() (docs.opencv.org) as explained in this tutorial:
www.learnopencv.com/seamless-cloning-using-opencv-python-cpp
The edges are perfectly smoothed, but unfortunately the colors of the elements are changed way too much. I know the approximate width and height of the gray border of each image patch. If I could smooth just that area, it might be a start and would already make the result look better than what I have. I tried different masks with cv2.seamlessClone(), but none of them worked, so unfortunately I couldn't find a way to blend only the borders of the patches.
The following images visualize my problem in a very abstract way.
What I have:
Left: Background, Right: Image patch
What I want:
What I currently get by using cv2.seamlessClone():
Any help would be very much appreciated!
EDIT As I probably was not clear enough: the real images are way more complex, so unfortunately I cannot get reasonable results for all image patches by using cv2.findContours(). What I am looking for is a method to merge the borders, so that you can no longer see the exact transition from patch to background.

import cv2
import numpy as np
patch = cv2.imread('patch.png', cv2.IMREAD_UNCHANGED)
image = cv2.imread('image.png', cv2.IMREAD_UNCHANGED)
mask = 255 * np.ones(patch.shape, patch.dtype)
height, width, channels = image.shape
center = (width//2, height//2)  # seamlessClone expects an (x, y) point
mixed_clone = cv2.seamlessClone(patch, image, mask, center, cv2.NORMAL_CLONE)
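One possible starting point, since the approximate border width is known, is to alpha-blend only the border region with a feathered mask. The following is a minimal sketch, not a tested solution; the file names, the paste position (x, y) and the border width are assumptions:
import cv2
import numpy as np
patch = cv2.imread('patch.png')
background = cv2.imread('background.png')
x, y = 100, 100   # assumed paste position
border = 10       # assumed width of the gray border in pixels
h, w = patch.shape[:2]
# Alpha mask: 1.0 inside the patch, ramping down to 0.0 across the border
alpha = np.zeros((h, w), np.float32)
alpha[border:h-border, border:w-border] = 1.0
alpha = cv2.GaussianBlur(alpha, (2*border + 1, 2*border + 1), 0)
alpha = alpha[..., None]   # add a channel axis so it broadcasts over BGR
roi = background[y:y+h, x:x+w].astype(np.float32)
blended = alpha * patch.astype(np.float32) + (1.0 - alpha) * roi
background[y:y+h, x:x+w] = blended.astype(np.uint8)
Because the patch borders and the new background are both nearly gray, a feathered ramp can hide the transition without shifting the interior colors the way cv2.seamlessClone() does.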

You could try to find the contour in your image patch with cv2.findContours() (the red spot). Then remove the background around the contour and save the image. Finally, you can combine the saved image (red spot without background) with the gray background image using cv2.add(). I have combined some code I once played with and the code in the OpenCV docs (for cv2.add()). Hope it helps a bit. Note that the example adds the image in the upper left corner; if you want it elsewhere, you should change the code. Cheers!
Example:
import cv2
import numpy as np
from PIL import Image
img = cv2.imread('background2.png', cv2.IMREAD_UNCHANGED)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshold = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
height,width = gray.shape
mask = np.zeros((height,width), np.uint8)
_, contours, hierarchy = cv2.findContours(threshold,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
cv2.drawContours(mask,[cnt], -1, (255,255,255),thickness=-1)
masked = cv2.bitwise_and(img, img, mask=mask)
_,thresh = cv2.threshold(mask,1,255,cv2.THRESH_BINARY)
_, contours, _ = cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
x,y,w,h = cv2.boundingRect(contours[0])
circle = masked[y:y+h,x:x+w]
cv2.imwrite('temp.png', circle)
img = Image.open('temp.png')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
    if item[0] == 0 and item[1] == 0 and item[2] == 0:
        newData.append((255, 255, 255, 0))
    else:
        newData.append(item)
img.putdata(newData)
img.save('background3.png', "PNG")
img1 = cv2.imread('background1.png')
img2 = cv2.imread('background3.png')
rows,cols,channels = img2.shape
roi = img1[0:rows, 0:cols ]
img2gray = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 110, 255, cv2.THRESH_BINARY_INV)
mask_inv = cv2.bitwise_not(mask)
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
dst = cv2.add(img1_bg,img2_fg)
img1[0:rows, 0:cols] = dst
cv2.imshow('img',img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

Related

How to detect defects in a PCB using OpenCV Python

I am working on a project in which I am supposed to detect defects in a PCB. I have already tried the image subtraction method. It detects missing components in the PCB, but it does not give correct output even when the PCB is not defective: it reports a missing component even when the component is present.
Here is my code:
import cv2
import numpy as np
from skimage.metrics import structural_similarity

# o_img and def_img are the original and defective board images, loaded earlier
g_o_img = cv2.cvtColor(o_img, cv2.COLOR_BGR2GRAY)
g_def_img = cv2.cvtColor(def_img, cv2.COLOR_BGR2GRAY)
score, diff = structural_similarity(g_o_img, g_def_img, full=True)
diff1 = (diff * 255).astype("uint8")
cv2.imshow("diff", diff1)
print("SSIM : " + format(score))
l_b = np.array([0])
u_b = np.array([50])
mask = cv2.inRange(diff1, l_b, u_b)
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(mask, kernel)  # note: this is a dilation, despite the original variable name "erode"
cv2.imshow("mask", mask)
cv2.imshow("dilated", dilated)
I have done perspective transformation and I am getting very good output, but as my images have different light illumination, I am not getting proper output using image subtraction. Here is my code:
img1 = cv2.imread("./pcb_images/orig3.png")
img1 = cv2.resize(img1, (640, 480))
image1 = img1[10:-10,10:-10]
print(image1)
g_o_img = cv2.cvtColor(image1, cv2.COLOR_BGR2LAB)[..., 0]
img2 = cv2.imread("./pcb_images/defect4.png")
img2 = cv2.resize(img2, (640, 480))
image2 = img2[10:-10,10:-10]
g_def_img = cv2.cvtColor(image2, cv2.COLOR_BGR2LAB)[...,0]
thresh1 = cv2.threshold(g_o_img, 130, 255, cv2.THRESH_BINARY)[1]
thresh2 = cv2.threshold(g_def_img, 130, 255, cv2.THRESH_BINARY)[1]
res = cv2.bitwise_xor(thresh1, thresh2, mask=None)
You have to do some preprocessing before background subtraction. Detect the PCB only (without the background) and make sure the perspective is always the same; if it is not, do a perspective transformation to crop the PCB out of the image (check here). Then convert the image to a color space that is more robust to light and shadows (check here). After that you can do background subtraction followed by thresholding (check here) to get a binary image. Finally, you can apply morphological transformations to remove the false positives (check here) and leave only the differences between the images.
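The Solution below starts from images that are already cropped and aligned. If you still need the perspective crop mentioned above, a rough sketch could look like the following; the contour-based corner detection, the file name and the 640x480 target size are assumptions, and the corner ordering returned by cv2.boxPoints() may need adjusting for your images:
import cv2
import numpy as np
img = cv2.imread('pcb.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# [-2] selects the contour list in both OpenCV 3.x and 4.x
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
board = max(contours, key=cv2.contourArea)   # assume the PCB is the largest contour
box = cv2.boxPoints(cv2.minAreaRect(board)).astype(np.float32)
dst = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
M = cv2.getPerspectiveTransform(box, dst)
warped = cv2.warpPerspective(img, M, (640, 480))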
Solution
This is an implementation of the approach explained above. You can experiment with different color spaces, threshold values, or morphological kernel sizes to see which gives you better results.
import cv2
import numpy as np
kernel = np.ones((7,7),np.uint8)
img1 = cv2.imread("orig.png")
img1 = cv2.resize(img1, (640, 480))
image1 = img1[10:-10,10:-10]
img2 = cv2.imread("defect.png")
img2 = cv2.resize(img2, (640, 480))
image2 = img2[10:-10,10:-10]
#Changing color space
g_o_img = cv2.cvtColor(image1, cv2.COLOR_BGR2LAB) [...,0]
g_def_img = cv2.cvtColor(image2, cv2.COLOR_BGR2LAB)[...,0]
#Image subtraction
sub = cv2.subtract(g_o_img, g_def_img)
thresh = cv2.threshold(sub, 130, 255, cv2.THRESH_BINARY)[1]
#Morphological opening
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
#Detecting blobs
params = cv2.SimpleBlobDetector_Params()
params.filterByInertia = False
params.filterByConvexity = False
params.filterByCircularity = False
im = cv2.bitwise_not(opening)
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(im)
#Drawing circle around blobs
im_with_keypoints = cv2.drawKeypoints(img2, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
#Display image with circle around defect
cv2.imshow('image',im_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output
Subtraction Result
Binary Image
Defect Image

How to improve image quality to extract text from an image using Tesseract

I'm trying to use Tesseract in the code below to extract the two lines of the image. I tried to improve the image quality, but even so it didn't work.
Can anyone help me?
from PIL import Image, ImageEnhance, ImageFilter
import pytesseract
img = Image.open(r'C:\ocr\test00.jpg')
new_size = tuple(4*x for x in img.size)
img = img.resize(new_size, Image.ANTIALIAS)
img.save(r'C:\\test02.jpg', 'JPEG')
print( pytesseract.image_to_string( img ) )
Given the comment by @barny I don't know if this will work, but you can try the code below. I created a script that selects the display area and warps it into a straight image. Next, it thresholds to a black-and-white mask of the characters, and the result is cleaned up a bit.
Try if it improves recognition. If it does, also look at the intermediate stages so you'll understand all that happens.
Update: It seems Tesseract prefers black text on a white background, so I inverted and dilated the result.
Result:
Updated result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('disp.jpg')
# create grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask = cv2.threshold(gray_image, 190, 255, cv2.THRESH_BINARY)
# findcontours
ret, contours, hier = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# select the largest contour
largest_area = 0
for cnt in contours:
    if cv2.contourArea(cnt) > largest_area:
        cont = cnt
        largest_area = cv2.contourArea(cnt)
# find the rectangle (and the cornerpoints of that rectangle) that surrounds the contours / photo
rect = cv2.minAreaRect(cont)
box = cv2.boxPoints(rect)
box = np.int0(box)
#### Warp image to square
# assign cornerpoints of the region of interest
pts1 = np.float32([box[2],box[3],box[1],box[0]])
# provide new coordinates of cornerpoints
pts2 = np.float32([[0,0],[500,0],[0,110],[500,110]])
# determine and apply transformationmatrix
M = cv2.getPerspectiveTransform(pts1,pts2)
tmp = cv2.warpPerspective(image,M,(500,110))
# create grayscale
gray_image2 = cv2.cvtColor(tmp, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask2 = cv2.threshold(gray_image2, 160, 255, cv2.THRESH_BINARY_INV)
# remove noise / close gaps
kernel = np.ones((5,5),np.uint8)
result = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernel)
#draw rectangle on original image
cv2.drawContours(image, [box], 0, (255,0,0), 2)
# dilate result to make characters more solid
kernel2 = np.ones((3,3),np.uint8)
result = cv2.dilate(result,kernel2,iterations = 1)
#invert to get black text on white background
result = cv2.bitwise_not(result)
#show image
cv2.imshow("Result", result)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

How to discard the edges of an image using opencv?

I'm pre-processing some images in order to remove the background from my area of interest. However, the images on my bench have rounded edges due to the focus of the camera. How do I discard these rounded edges and extract only my object of interest from the image? With the code below I can remove the background of the image, but it does not work correctly because of the edges around it.
import numpy as np
import cv2
#Read the image and perform threshold and get its height and width
img = cv2.imread('IMD408.bmp')
h, w = img.shape[:2]
# Transform to gray colorspace and blur the image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
# Make a fake rectangle around the image that will separate the main contour.
cv2.rectangle(blur, (0,0), (w,h), (255,255,255), 10)
# Perform Otsu threshold.
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Create a mask for bitwise operation
mask = np.zeros((h, w), np.uint8)
# Search for contours and iterate over contours. Make threshold for size to
# eliminate others.
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for i in contours:
    cnt = cv2.contourArea(i)
    if 1000000 > cnt > 100000:
        cv2.drawContours(mask, [i], -1, 255, -1)
# Perform the bitwise operation.
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result.
cv2.imwrite('IMD408.png', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
input image:
Output:
Error:
Since you mentioned that all the images have the same hue, this should work well for them. The steps are:
Do some white balancing, which will increase the contrast a bit.
Get the grayscale.
Threshold the grayscale image so that values above 127 become white (255). This gives you a binary image, which becomes a mask for the original image.
Apply the mask.
You might have to play around with the thresholding if you want better results; here is the link for that. But this should get you started. I'm using a different OpenCV version than you, so you might have to tweak the code a bit.
import cv2

def equaliseWhiteBalance(image):
    ''' Return equalised WB of an image '''
    wb = cv2.xphoto.createSimpleWB()       #Create WB object
    imgWB = wb.balanceWhite(image)         #Balance white on the input image
    b, g, r = cv2.split(imgWB)             #Get individual b,g,r channels (OpenCV uses BGR order)
    r_equ = cv2.equalizeHist(r)            #Equalise RED channel
    g_equ = cv2.equalizeHist(g)            #Equalise GREEN channel
    b_equ = cv2.equalizeHist(b)            #Equalise BLUE channel
    img_equ_WB = cv2.merge([b_equ, g_equ, r_equ])  #Merge equalised channels
    return img_equ_WB
#Read the image
img = cv2.imread('IMD408.bmp')
result = img.copy()
#Get whiteBalance of image
imgWB = equaliseWhiteBalance(img)
cv2.imshow('img', imgWB)
cv2.waitKey(0)
# Get gray image
gray = cv2.cvtColor(imgWB,cv2.COLOR_BGR2GRAY)
cv2.imshow('img', gray)
cv2.waitKey(0)
# Perform threshold
_, thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)
cv2.imshow('img', thresh)
cv2.waitKey(0)
# Apply mask
result[thresh!=0] = (255,255,255)
cv2.imshow('img', result)
cv2.waitKey(0)
If the dark corner vignettes have different sizes per image, then I suggest looking for the centroids of contours on the binary (mask) image. Centroids with a short distance to any corner of your image will belong to the dark vignettes, so their value can be changed from black to white.
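A hedged sketch of this idea, where the file name and the 50 px distance threshold are assumptions you would need to adapt:
import cv2
import numpy as np
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)   # assumed binary mask image
h, w = mask.shape[:2]
corners = [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]
# [-2] selects the contour list in both OpenCV 3.x and 4.x
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for cnt in contours:
    M = cv2.moments(cnt)
    if M['m00'] == 0:
        continue
    cx, cy = M['m10'] / M['m00'], M['m01'] / M['m00']
    # A centroid close to any image corner marks a vignette contour
    if min(np.hypot(cx - px, cy - py) for (px, py) in corners) < 50:
        cv2.drawContours(mask, [cnt], -1, 255, -1)   # change it to white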

How to improve edge detection and remove background from an image?

I am using the code below to remove the background of the images and highlight only my region of interest (ROI). However, on some images the algorithm behaves incorrectly, discarding the stain (ROI) and deleting it along with the background.
import numpy as np
import cv2
#Read the image and perform threshold
img = cv2.imread('photo.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray,5)
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
#Search for contours and select the biggest one
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
#Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
#Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
#Display the result
cv2.imwrite('photo.png', res)
#cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
I don't know if I understand correctly, because when I run your code I do not get the output you posted. If you would like to obtain only the mole, it can't be done by simply thresholding, because the mole is too near the border; plus, if you look at your image closely, you will see that it has some sort of frame. However, there is a simple way to do this for this image, though it may not work in other cases. You can draw a fake border over your image and separate the ROI from the other noisy areas. Then set a threshold for which contours you wish to display. Cheers!
Example:
#Import all necessary libraries
import numpy as np
import cv2
#Read the image and perform threshold and get its height and width
img = cv2.imread('moles.png')
h, w = img.shape[:2]
# Transform to gray colorspace and blur the image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
# Make a fake rectangle around the image that will separate the main contour.
cv2.rectangle(blur, (0,0), (w,h), (255,255,255), 10)
# Perform Otsu threshold.
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Create a mask for bitwise operation
mask = np.zeros((h, w), np.uint8)
# Search for contours and iterate over contours. Make threshold for size to
# eliminate others.
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for i in contours:
    cnt = cv2.contourArea(i)
    if 1000000 > cnt > 100000:
        cv2.drawContours(mask, [i], -1, 255, -1)
# Perform the bitwise operation.
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result.
cv2.imwrite('mole_res.jpg', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

Extracting Walls from Image in OpenCv

I have the following image of a maze:
Input:
The walls are white and the path is black. How can I extract one of the walls into a separate image in OpenCV?
This is the output I want
Output:
You can use the concept of the flood fill algorithm for this problem.
In your input image, notice how the 'walls' are distinct and neighbored by black pixels (the path). When you initialize the algorithm at any pixel within a 'wall', that wall will be separated from the rest of the image.
Code:
import os
import cv2
import numpy as np

path = r'C:\Users\Desktop'
filename = 'input.png'
img = cv2.imread(os.path.join(path, filename))
cv2.imshow('Original', img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('thresh1', thresh)
# Flood fill from the top-left corner; the mask must be 2 px larger than the image
im_floodfill = thresh.copy()
h, w = thresh.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
cv2.floodFill(im_floodfill, mask, (0,0), 255)
cv2.imshow('im_floodfill', im_floodfill)
cv2.imshow('fin', cv2.bitwise_not(cv2.bitwise_not(im_floodfill) + thresh))
cv2.waitKey(0)
cv2.destroyAllWindows()
There is another example detailing this function on THIS POST
