I am trying to preprocess a photo of eye blood vessels by removing the black border and extraneous non-eye features (e.g. the text and "clip" visible in the example below), replacing the black areas with the average pixel value of 3 random squares.
import statistics

crop1 = randomCrop(image2, 50, 50) # User-defined function that takes a random 50x50 crop
crop2 = randomCrop(image2, 50, 50)
crop3 = randomCrop(image2, 50, 50)

mean1 = RGB_Mean(crop1) # User-defined function returning the per-channel (RGB) mean of a crop
mean2 = RGB_Mean(crop2)
mean3 = RGB_Mean(crop3)

# RGB mean across the three crops: the fill color
result = [statistics.mean(k) for k in zip(mean1, mean2, mean3)]

# Replace near-black and near-white pixels with the fill color
for i in range(image2.shape[0]):
    for j in range(image2.shape[1]):
        thru_pixel = image2[i, j]
        if thru_pixel[0] < 50 and thru_pixel[1] < 50 and thru_pixel[2] < 50:
            image2[i, j, :] = result
        if thru_pixel[0] > 190 and thru_pixel[1] > 190 and thru_pixel[2] > 190:
            image2[i, j, :] = result
However, there is leftover noise around the border of the image, as well as leftover text and a clip at the bottom left.
Here's an example image.
Original:
Post-processing:
You can see there's still leftover black-gray border noise, as well as text at the bottom right and a "clip" at the bottom left. Is there anything I could try to get rid of these artifacts while maintaining the integrity of the eye blood vessels?
Thank you for your time and help!
Assuming you want to isolate the eye blood vessels, here's an approach that can be broken down into two stages: one to remove the artifacts and another to isolate the blood vessels.
Convert image to grayscale
Otsu's threshold to obtain binary image
Perform morphological operations to remove artifacts
Adaptive threshold to isolate blood vessels
Find contours and filter using maximum threshold area
Bitwise-and to get final result
Beginning from your original image, we convert to grayscale and apply Otsu's threshold to obtain a binary image.
Now we perform a morphological open to remove the artifacts (left). We invert this mask to obtain the white border and then do a series of bitwise operations to get the artifact-removed image (right).
From here we apply an adaptive threshold to extract the veins.
Note there is still the unwanted border, so we find contours and filter using a maximum threshold area. If a contour passes the filter, we draw it onto a blank mask.
Finally, we perform a bitwise-and with the original image to get our result.
Note that we could have performed an additional morphological open after the adaptive threshold to remove small specks of noise, but the tradeoff is that it would also remove some vein detail. I'll leave this optional step up to you; a sketch is shown after the code below.
import cv2
import numpy as np
# Grayscale, Otsu's threshold, opening
image = cv2.imread('1.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,15))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)
inverse = 255 - opening
inverse = cv2.merge([inverse,inverse,inverse])
removed_artifacts = cv2.bitwise_and(image,image,mask=opening)
removed_artifacts = cv2.bitwise_or(removed_artifacts, inverse)
# Isolate blood vessels
veins_gray = cv2.cvtColor(removed_artifacts, cv2.COLOR_BGR2GRAY)
adaptive = cv2.adaptiveThreshold(veins_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,3)
cnts = cv2.findContours(adaptive, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area < 5000:
        cv2.drawContours(blank_mask, [c], -1, (255,255,255), 1)
blank_mask = cv2.cvtColor(blank_mask, cv2.COLOR_BGR2GRAY)
final = cv2.bitwise_and(image, image, mask=blank_mask)
# final[blank_mask==0] = (255,255,255) # White version
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('removed_artifacts', removed_artifacts)
cv2.imshow('adaptive', adaptive)
cv2.imshow('blank_mask', blank_mask)
cv2.imshow('final', final)
cv2.waitKey()
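As a sketch of the optional denoising step mentioned above (the kernel size here is an assumption you would tune per image), the extra opening would go right after the adaptive threshold:

# Optional, as noted above: a light morphological open on the adaptive
# threshold to suppress small specks of noise, at the cost of fine vein detail
noise_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
adaptive = cv2.morphologyEx(adaptive, cv2.MORPH_OPEN, noise_kernel, iterations=1)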
Related
I want to remove all the text from this image using inpainting. I had been trying various methods, and eventually found that I can get results by running OCR and then using thresholding to mask the image.
processedImage = preprocess(partOFimg)
mask = np.ones(img.shape[:2], dtype="uint8") * 255
for c in cnts:
    cv2.drawContours(mask, [c], -1, 0, -1)
img = cv2.inpaint(img, mask, 7, cv2.INPAINT_TELEA)
Preprocess operations:
ret,thresh1 = cv2.threshold(gray, 0, 255,cv2.THRESH_OTSU|cv2.THRESH_BINARY_INV)
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
dilation = cv2.dilate(thresh1, rect_kernel, iterations = 1)
edged = cv2.Canny(dilation, 50, 100)
cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
mask = np.ones(img.shape[:2], dtype="uint8") * 255
When I run the above code, here is the output image. As you can see, it leaves blocks of a different color over the image, and I want to prevent that. How do I achieve this? I also see that the mask images are often not formed well, and in cases where the text is white the preprocessing doesn't work properly.
How do I prevent these blocks of other colors from forming on the image?
Grayed sub-image:
Thresholded sub-image:
Masked image:
EDIT 1:
I've managed to get this new, better result by noticing that my threshold is the best mask I can get. After realizing this, I performed the masking process 3 different times with varying masks and inversions: I ran the inpainting algorithm 3 times, sometimes on the inverted mask, because in some cases the required mask is the inverted one. I still think it needs improvement; if I choose a different image, the results are not as good. A sketch of one way to read this follows.
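A minimal sketch of that mask-or-inverse idea, under the assumption that the text covers a minority of the pixels (the file names and the inpaint radius are placeholders, not from the original code): since cv2.inpaint fills the nonzero mask pixels, pick whichever of the mask and its inverse has the smaller white area before inpainting.

import cv2
import numpy as np

img = cv2.imread('input.jpg')                             # placeholder input
mask = cv2.imread('text_mask.png', cv2.IMREAD_GRAYSCALE)  # placeholder text mask

# If the mask came out inverted (e.g. white text on a dark background),
# more than half its pixels will be white, so flip it before inpainting
if cv2.countNonZero(mask) > mask.size // 2:
    mask = 255 - mask
result = cv2.inpaint(img, mask, 7, cv2.INPAINT_TELEA)
cv2.imwrite('inpainted.png', result)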
Python/OpenCV inpaint methods, generally, are not appropriate for your type of image. They work best on thin (scratch-like) regions, not large blocks. You really need an exemplar-type method such as https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/criminisi_tip2004.pdf, but OpenCV does not have that.
However, I suspect the OpenCV methods do work here because you are filling with constant colors (green) and not texture. So it is best to try to get a mask of just the letters (characters), not rectangular blocks for the words. To show you what I mean, here is my Python/OpenCV approach.
Input:
Read the input
Threshold on the green sign
Apply morphology to close it up and keep as mask1
Apply the mask to the image to blacken out the outside of the sign
Threshold on the white in this new image and keep as mask2
Apply morphology dilate to enlarge it slightly and save as mask3
Do the inpaint
Save the results
import cv2
import numpy as np
# read input
img = cv2.imread('airport_sign.jpg')
# threshold on green sign
lower = (30,80,0)
upper = (70,120,20)
thresh = cv2.inRange(img, lower, upper)
# apply morphology close
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (135,135))
mask1 = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# apply mask to img
img2 = img.copy()
img2[mask1==0] = (0,0,0)
# threshold on white
#gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
#mask2 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
lower = (120,120,120)
upper = (255,255,255)
mask2 = cv2.inRange(img2, lower, upper)
# apply morphology dilate
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
mask3 = cv2.morphologyEx(mask2, cv2.MORPH_DILATE, kernel)
# do inpainting
result1 = cv2.inpaint(img,mask3,11,cv2.INPAINT_TELEA)
result2 = cv2.inpaint(img,mask3,11,cv2.INPAINT_NS)
# save results
cv2.imwrite('airport_sign_mask.png', mask3)
cv2.imwrite('airport_sign_inpainted1.png', result1)
cv2.imwrite('airport_sign_inpainted2.png', result2)
# show results
cv2.imshow('thresh',thresh)
cv2.imshow('mask1',mask1)
cv2.imshow('img2',img2)
cv2.imshow('mask2',mask2)
cv2.imshow('mask3',mask3)
cv2.imshow('result1',result1)
cv2.imshow('result2',result2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Mask 3:
Inpaint 1 (Telea):
Inpaint 2 (NS):
I have the following image, which is a scanned printed paper with 4 images on it. I printed 4 images on the same sheet of paper to save printing resources:
However, now I need to extract each image and create an individual image file for each of them. Is there any easy way of doing that with Python, Matlab or any other programming language?
Here is one way to do that in Python/OpenCV. But it requires that the pictures' colors at their edges be sufficiently different from the background color. If so, you can threshold the image, then get contours and use their bounding boxes to crop out each image.
Read the input
Threshold based on the background color
Invert the threshold, so that the background is black
Apply morphology open and close to fill the picture regions and remove noise
Get the external contours
For each contour, get its bounding box and crop the input image and save it to disk
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('4faces.jpg')
# threshold on background color
lowerBound = (230,230,230)
upperBound = (255,255,255)
thresh = cv2.inRange(img, lowerBound, upperBound)
# invert so background black
thresh = 255 - thresh
# apply morphology to ensure regions are filled and remove extraneous noise
kernel = np.ones((7,7), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((11,11), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# get bounding boxes and crop input
i = 1
for cntr in contours:
    # get bounding box and crop
    x,y,w,h = cv2.boundingRect(cntr)
    crop = img[y:y+h, x:x+w]
    cv2.imwrite("4faces_crop_{0}.png".format(i), crop)
    i = i + 1
# save threshold
cv2.imwrite("4faces_thresh.png",thresh)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image after morphology cleaning:
Cropped Images:
I am working with videos that have borders (margins) around them. Some have them along all 4 sides, some along left and right only, and some along top and bottom only. The width of these margins is also not fixed.
I am extracting frames from these videos, as for example,
and
Both of these contain borders on the top and bottom.
Can anyone please suggest some methods to remove these borders from these images (in Python, preferably).
I came across some methods, like this one on Stack Overflow, but they deal with an ideal situation where the borders are perfectly black (0,0,0). In my case, they may not be pitch black, and may also contain jittery noise.
Any help/suggestions would be highly appreciated.
Here is one way to do that in Python/OpenCV.
Read the image
Convert to grayscale and invert
Threshold
Apply morphology to remove small black or white regions then invert again
Get the contour of the one region
Get the bounding box of that contour
Use numpy slicing to crop that area of the image to form the resulting image
Save the resulting image
import cv2
import numpy as np
# read image
img = cv2.imread('gymnast.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# invert gray image
gray = 255 - gray
# gaussian blur
blur = cv2.GaussianBlur(gray, (3,3), 0)
# threshold
thresh = cv2.threshold(blur,236,255,cv2.THRESH_BINARY)[1]
# apply close and open morphology to fill tiny black and white holes
kernel = np.ones((5,5), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# invert thresh
thresh = 255 - thresh
# get contours (presumably just one around the nonzero pixels)
# then crop it to bounding rectangle
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
cntr = contours[0]
x,y,w,h = cv2.boundingRect(cntr)
crop = img[y:y+h, x:x+w]
cv2.imshow("IMAGE", img)
cv2.imshow("THRESH", thresh)
cv2.imshow("CROP", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save cropped image
cv2.imwrite('gymnast_crop.png',crop)
Input:
Thresholded and cleaned image:
Cropped Result:
I have a problem: I have separate photos of vehicle registration plates, and I would like to extract the registration number from each photo. Unfortunately, the accuracy of the code I wrote is very low, and I would like to ask for help in improving it. Any tips?
In the first phase, the photo looks like this:
Then I convert the photo to HSV and threshold it so that only the black areas stand out:
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,100,130])
# Threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
which produces:
What can I add or do to improve the effectiveness of the program? Is there a way to make it read the registrations more reliably? Would this help:
configr = ('-l eng --oem 1 --psm 6 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
text = pytesseract.image_to_string(mask,lang='eng', config=configr)
print(text)
Here's an approach:
Color threshold to extract black text. We load the image, convert to HSV colorspace, define a lower and upper color range, and use cv2.inRange() to color threshold and obtain a binary mask
Perform morphological operations. Create a kernel and perform morph close to fill holes in the contours.
Filter license plate contours. Find contours and filter using bounding rectangle area. If a contour passes this filter, we extract the ROI and paste it onto a new blank mask.
OCR using Pytesseract. We invert the image so desired text is black and throw it into Pytesseract.
Here's a visualization of each step:
Obtained mask from color thresholding + morph closing
Filter for license plate contours highlighted in green
Pasted plate contours onto a blank mask
Inverted image ready for Tesseract
Result from Tesseract OCR
PZ 689LR
Code
import numpy as np
import pytesseract
import cv2
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Load image, create blank mask, convert to HSV, define thresholds, color threshold
image = cv2.imread('1.png')
result = np.zeros(image.shape, dtype=np.uint8)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0,0,0])
upper = np.array([179,100,130])
mask = cv2.inRange(hsv, lower, upper)
# Perform morph close and merge for 3-channel ROI extraction
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=1)
extract = cv2.merge([close,close,close])
# Find contours, filter using contour area, and extract using Numpy slicing
cnts = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    area = w * h
    if area < 5000 and area > 2500:
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)
        result[y:y+h, x:x+w] = extract[y:y+h, x:x+w]
# Invert image and throw into Pytesseract
invert = 255 - result
data = pytesseract.image_to_string(invert, lang='eng',config='--psm 6')
print(data)
cv2.imshow('image', image)
cv2.imshow('close', close)
cv2.imshow('result', result)
cv2.imshow('invert', invert)
cv2.waitKey()
I am trying to find accurate locations for the corners on ink blotches as seen below:
My idea is to fit lines to the edges and then find where they intersect. As of now, I've tried using cv2.approxPolyDP() with various values of epsilon to approximate the edges; however, this doesn't look like the way to go. My cv2.approxPolyDP code gives the following result:
Ideally, this is what I want to produce (drawn on paint):
Are there CV functions in place for this sort of problem? I've considered using Gaussian blurring before the threshold step although that method does not seem like it would be very accurate for corner finding. Additionally, I would like this to be robust to rotated images, so filtering for vertical and horizontal lines won't necessarily work without other considerations.
Code:
import numpy as np
from PIL import ImageGrab
import cv2
def process_image4(original_image):  # Douglas-Peucker approximation
    # Convert to black and white threshold map
    gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    (thresh, bw) = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Convert bw image back to colored so that red, green and blue contour lines are visible, draw contours
    modified_image = cv2.cvtColor(bw, cv2.COLOR_GRAY2BGR)
    contours, hierarchy = cv2.findContours(bw, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(modified_image, contours, -1, (255, 0, 0), 3)
    # Contour approximation
    try:  # Just to be sure it doesn't crash while testing!
        for cnt in contours:
            epsilon = 0.005 * cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, epsilon, True)
            # cv2.drawContours(modified_image, [approx], -1, (0, 0, 255), 3)
    except:
        pass
    return modified_image

def screen_record():
    while(True):
        screen = np.array(ImageGrab.grab(bbox=(100, 240, 750, 600)))
        image = process_image4(screen)
        cv2.imshow('window', image)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

screen_record()
A note about my code: I'm using screen capture so that I can process these images live. I have a digital microscope that can display live feed on a screen, so the constant screen recording will allow me to sample from the video feed and locate the corners live on the other half of my screen.
Here's a potential solution using thresholding + morphological operations:
Obtain binary image. We load the image, convert to grayscale, blur with cv2.bilateralFilter(), then apply Otsu's threshold
Morphological operations. We perform a series of morphological open and close to smooth the image and remove noise
Find distorted approximated mask. We find the distorted bounding rectangle of the object with cv2.minAreaRect() and cv2.boxPoints(), then draw this onto a mask
Find corners. We use the Shi-Tomasi Corner Detector already implemented as cv2.goodFeaturesToTrack() for corner detection. Take a look at this for an explanation of each parameter
Here's a visualization of each step:
Binary image -> Morphological operations -> Approximated mask -> Detected corners
Here are the corner coordinates:
(103, 550)
(1241, 536)
Here's the result for the other images
(558, 949)
(558, 347)
Finally for the rotated image
(201, 99)
(619, 168)
Code
import cv2
import numpy as np
# Load image, bilateral blur, and Otsu's threshold
image = cv2.imread('1.png')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray,9,75,75)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Perform morphological operations
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,10))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)
# Find distorted rectangle contour and draw onto a mask
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
rect = cv2.minAreaRect(cnts[0])
box = cv2.boxPoints(rect)
box = np.int64(box)
cv2.drawContours(image,[box],0,(36,255,12),4)
cv2.fillPoly(mask, [box], (255,255,255))
# Find corners
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(mask,4,.8,100)
offset = 25
for corner in corners:
    x, y = corner.ravel()
    x, y = int(x), int(y)  # convert to int for OpenCV drawing functions
    cv2.circle(image,(x,y),5,(36,255,12),-1)
    cv2.rectangle(image, (x - offset, y - offset), (x + offset, y + offset), (36,255,12), 3)
    print("({}, {})".format(x,y))
cv2.imshow('image', image)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('mask', mask)
cv2.waitKey()
Note: The idea for the distorted bounding box came from a previous answer in How to find accurate corner positions of a distorted rectangle from blurry image
After seeing the description of the corners, here is what I would recommend:
by any method, find the gross location of the corners (for instance by looking for the extreme values of ±x±y, the diagonal directions, or of ±x and ±y).
consider a strip that joins two corners, with a certain width. Extract the pixels in that strip, on a portion close to the corner, rotate it to horizontal, and average the values along the horizontal direction.
you will obtain a gray profile that tells you the accurate position of the edge, at the point where it crosses the mean of the background and foreground intensities.
repeat on all four edges and at both ends. This will give you four accurate corners, by intersection. A sketch of both steps follows.
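A minimal sketch of both steps, under stated assumptions: the blotch is dark on a light background, the strip width and length are illustrative values, and gross_corners / edge_profile are hypothetical helper names, not an existing API.

import cv2
import numpy as np

def gross_corners(mask):
    # Gross corner estimates: extremes of x+y and x-y over foreground pixels
    ys, xs = np.nonzero(mask)
    s, d = xs + ys, xs - ys
    idx = [s.argmin(), d.argmax(), s.argmax(), d.argmin()]  # ~TL, TR, BR, BL
    return [(int(xs[i]), int(ys[i])) for i in idx]

def edge_profile(gray, p1, p2, width=20, length=60):
    # Rotate a strip centred on the edge p1 -> p2 to horizontal (portion near p1)
    p1, p2 = np.float32(p1), np.float32(p2)
    d = p2 - p1
    angle = np.degrees(np.arctan2(d[1], d[0]))
    M = cv2.getRotationMatrix2D((float(p1[0]), float(p1[1])), angle, 1.0)
    M[0, 2] -= p1[0]              # strip starts at x = 0
    M[1, 2] += width / 2 - p1[1]  # strip is vertically centred on the edge
    strip = cv2.warpAffine(gray, M, (length, width))
    # Average along the edge direction: one gray value per row across the strip
    return strip.mean(axis=1)

# Usage: the profile crosses the mean of the background and foreground
# intensities at the accurate position of the edge.
gray = cv2.imread('blotch.png', cv2.IMREAD_GRAYSCALE)  # placeholder input
mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
corners = gross_corners(mask)
profile = edge_profile(gray, corners[0], corners[1])
mid = (profile.min() + profile.max()) / 2.0
edge_row = np.argmin(np.abs(profile - mid))  # row offset of the edge within the strip
print(corners, edge_row)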