How to remove text stamps from images? - python

I have a picture like this:
It has text stamps randomly distributed throughout the image file. Some aspects to keep in mind about the image are:
The text in the stamp is always the same.
There is no transparency.
The stamp text is black, so there is a significant contrast difference with the original content.
So my questions are:
How do I find these text stamps? I'm guessing template matching with some tolerance could help.
Even if I find the exact location of the text, how do I get rid of it? I could try to figure out the random background and do something like the following:
Get the bounding box of the text stamp contour.
Then take all pixels outside of the contour.
Remove the contour and fill it with random pixels from the previous step; adding some blur should do the trick, I expect. A rough sketch of this idea follows.
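For reference, a minimal sketch of the template-matching idea might look like the one below. Note the assumptions: stamp_template.png is a hypothetical cropped stamp image, the 0.7 match tolerance is a value to tune, and cv2.inpaint stands in for the manual background-filling step described above:

import cv2
import numpy as np

# locate every stamp occurrence via normalized template matching
img = cv2.imread('stamp.jpg', cv2.IMREAD_GRAYSCALE)
tmpl = cv2.imread('stamp_template.png', cv2.IMREAD_GRAYSCALE)
res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)

# build a mask covering each match above the tolerance threshold
th, tw = tmpl.shape
mask = np.zeros(img.shape, np.uint8)
for y, x in zip(*np.where(res >= 0.7)):
    mask[y:y + th, x:x + tw] = 255

# let inpainting reconstruct the background under each masked region
clean = cv2.inpaint(cv2.imread('stamp.jpg'), mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite('stamp_inpainted.jpg', clean)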

The following code removes the stamp from your image:
import cv2
import numpy as np

inp_img = cv2.imread('stamp.jpg', cv2.IMREAD_GRAYSCALE)
# invert and threshold so the dark stamp text becomes white foreground
th, inp_img_thresh = cv2.threshold(255 - inp_img, 220, 255, cv2.THRESH_BINARY)
dilate = cv2.dilate(inp_img_thresh, np.ones((5, 5), np.uint8))
canny = cv2.Canny(dilate, 0, 255)
# OpenCV 3.x returns three values here; on OpenCV 4.x use:
# contours, _ = cv2.findContours(...)
_, contours, _ = cv2.findContours(canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
test_img = inp_img.copy()
for c in contours:
    (x, y, w, h) = cv2.boundingRect(c)
    # overwrite each stamp's bounding box with a flat near-white value
    test_img[y+3:y-2+h, x+3:x+w] = 240
cv2.imwrite("stamp_removed.jpg", test_img)
cv2.imshow("input image", inp_img)
cv2.imshow("threshold", inp_img_thresh)
cv2.imshow("output image", test_img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Related

OpenCV: Reducing "fuzzy maxima" into single points

I have used:
result = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
to generate this output:
I need a list of (x, y) tuples at each of the local maxima (bright spots) in the result. Simply finding all points above a threshold doesn't work, since there are many such points around each maximum.
I can guarantee the minimum distance between any two maxima, which ought to help speed things up.
Is there an efficient technique for doing this?
(P.S.: this is cross-posted from https://forum.opencv.org/t/locating-local-maximums/1534)
Update
Based on an excellent suggestion by Michael Lee, I've added skeletonization of the thresholded image. It's close, but the skeletonized image still has many "worms" rather than single points. My processing flow is as follows:
import cv2 as cv
import numpy as np

# read the image
im = cv.imread("image.png", cv.IMREAD_GRAYSCALE)
# apply thresholding (args.threshold comes from the script's argument parser)
ret, im2 = cv.threshold(im, args.threshold, 255, cv.THRESH_BINARY)
# dilate the thresholded image to eliminate "pinholes"
im3 = cv.dilate(im2, None, iterations=2)
# skeletonize the result
im4 = cv.ximgproc.thinning(im3, None, cv.ximgproc.THINNING_ZHANGSUEN)
# print the number of points found
x, y = np.nonzero(im4)
print(x.shape)
# => 1208
This is a step in the right direction, but there should be more like 220 points, not 1208.
Here are the intermediate results. As you can see in the last picture (skeletonized), there are still lots of little "worms" rather than single points. Is there a better approach?
Thresholded:
Dilated:
Skeletonized:
Update 2/14: It seems like skeletonization only took you part of the way there. Here's a better solution that I believe should get you the rest of the way. This is how you would do it in scikit-image; maybe you can find the analog in OpenCV (cv2.findContours seems like a good start).
# mask is the thresholded image (before or after dilation should work; no skeletonization needed)
from skimage.measure import label, regionprops
labeled_image = label(mask)
output_points = [region.centroid for region in regionprops(labeled_image)]
Explanation: label converts your binary image into a labeled image, where each connected mask gets a distinct integer value. Then regionprops uses these labels to separate the masks, and the centroid property gives the middle point of each one, which is guaranteed to be a single point.
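As a possible OpenCV analog (an assumption on my part, not part of the original answer), cv2.connectedComponentsWithStats returns per-component centroids directly:

import cv2

# mask is the thresholded 8-bit image, as above
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
output_points = [tuple(c) for c in centroids[1:]]  # label 0 is the background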
Simply finding all points above a threshold doesn't work, since there are many such points around each maximum.
Actually, this does work - as long as you apply one more processing step. After thresholding, we want to skeletonize. Scikit-image has a good function to achieve that (skimage.morphology.skeletonize), which should give you a binary mask with single points.
Afterwards, you're probably going to want to run something like:
indices = zip(*np.where(skeleton))
to get your final points!
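Putting those steps together, a minimal sketch of the skimage route might look like this (the 0.9 threshold is an assumption to tune; result is the matchTemplate output from the question):

import numpy as np
from skimage.morphology import skeletonize

mask = result > 0.9            # threshold the match surface
skeleton = skeletonize(mask)   # thin each blob, ideally down to isolated pixels
indices = list(zip(*np.where(skeleton)))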
Based on Michael Lee's answer, here's the solution that worked for me (using all OpenCV rather than skimage):
import cv2 as cv

# read in color image and create a grayscale copy
im = cv.imread("image.png")
img = cv.cvtColor(im, cv.COLOR_BGR2GRAY)
# apply thresholding (args.threshold comes from the script's argument parser)
ret, im2 = cv.threshold(img, args.threshold, 255, cv.THRESH_BINARY)
# dilate the thresholded peaks to eliminate "pinholes"
im3 = cv.dilate(im2, None, iterations=2)
contours, hier = cv.findContours(im3, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
print('found', len(contours), 'contours')
# draw a bounding box around each contour
for contour in contours:
    x, y, w, h = cv.boundingRect(contour)
    cv.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv.imshow('Contours', im)
cv.waitKey()
which results in just what we're looking for:

Is there another way to fill the area outside a rotated image with white color? 'fillcolor' does not work with older versions of Python

I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, and they do not support the 'fillcolor' argument. I cannot upgrade to newer versions due to certain restrictions, and I cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
The rotated image has black in the area outside the rotated part. I want to fill it with white color.
Original: Original image
Rotated: Rotated image
You can try compositing the rotated image over a white background via Image.composite() to get rid of the black bars/borders.
from PIL import Image

img = Image.open(r"Image_Path")
orig_mode = img.mode
img = img.convert("RGBA")  # alpha is required for the compositing step
angle = 30
img = img.rotate(angle)    # the area outside the rotation is now transparent
new_img = Image.new('RGBA', img.size, 'white')
Alpha_Image = Image.composite(img, new_img, img)  # img doubles as the mask
Alpha_Image = Alpha_Image.convert(orig_mode)      # back to the original mode
Alpha_Image.show()
The above code takes an image, converts it into mode RGBA (alpha is required for this process), and then rotates it by 30 degrees. After that, it creates an empty RGBA image of the same dimensions as the original, with every pixel set to 255 in each channel (i.e. pure white for RGB, and full opacity for alpha). It then composites the rotated image over this white one, using the rotated image's own transparency as the mask. This yields the desired result, where the black bars/edges are replaced by white. In the end, we convert the image back to its original mode.
ORIGINAL IMAGE:
IMAGE AFTER ROTATING 30 DEGREES:
An awkward option that has always worked for me (with my tools I always get a light gray "border" around the rotated image that interferes with filling):
Add a border to the non-rotated image and use the fill color with that border.
The bordering operation is lossless and the filling will be exact (and easy).
Rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
Calculate the size of the rotated border using trigonometry. The result will not be exact (e.g. "131.12 pixels"). Usually you can do this in reverse: start with an exact border on the rotated image and calculate the border you need to add, then adjust the border width so that the non-rotated border is exact. Example: with a rotated border of 170 pixels you get a non-rotated border of 140.3394 pixels, so you use a 510 pixel rotated border instead, which requires adding a 421.018 pixel non-rotated border. That is close enough to 421 pixels to be acceptable.
Remove the rotated border.
This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image.
It has the drawback that you end up with a larger rotation, with higher memory use and computation time, especially if you use larger borders to increase precision. A rough sketch of these steps follows.
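A rough sketch of the border trick, using the hypothetical border widths from the example above (in practice you would compute them from your rotation angle via the trigonometry described):

from PIL import Image, ImageOps

im = Image.open("input.png")
pre_border = 421    # border added before rotating (exact, lossless white fill)
post_border = 510   # corresponding rotated border to strip afterwards
bordered = ImageOps.expand(im, border=pre_border, fill="white")
rotated = bordered.rotate(30)
w, h = rotated.size
result = rotated.crop((post_border, post_border, w - post_border, h - post_border))
result.show()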
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This can be done with magic numbers (the rectangle's coordinates). This works for me (you might need to tweak it a little):
from PIL import Image

im = Image.open("mFul4.png")
rotated = im.rotate(105)
box = (55, 65, 200, 210)  # magic numbers: the rectangle to keep
d = rotated.crop(box=box)
im.paste(d, box=box)
im.save("ex.bmp")
And the output:
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049  # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4):  # 4 triangles, one per corner
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x - j) * cos) + 1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0] - j - 1, d.size[1] - k - 1)
            if i in [2, 3]:
                y, x = (d.size[0] - j - 2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()

how to draw outlines of objects on an image to a separate image

I am working on a puzzle; my final task here is to identify the edge type of a puzzle piece.
As shown in the above image, I have managed to rotate and crop out every edge of the piece at the same angle. My next step is to separate the edge line into a separate image, as shown in the image below,
then to fill up one side of the line with a color and try to process it to decide what type of edge it is.
I don't see a proper way to separate the edge line from the image for now.
My approach:
One way to do it is to scan pixel by pixel and find the black pixels that have a non-black pixel next to them. This is code I could implement, but it feels like a primitive and time-consuming approach.
So if you can offer any help or ideas, or any completely different way to detect the hollows and humps, that would be appreciated.
Thanks in advance.
First convert your color image to grayscale. Then apply a threshold, say zero, to obtain a binary image. You may have to use morphological operations to further process the binary image if there are holes. Then find the contours of this image and draw them to a new image.
A simple code is given below, using OpenCV 4.0.1 in Python 2.7.
import cv2
import numpy as np

bgr = cv2.imread('puzzle.png')
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
_, roi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
cv2.imwrite('/home/dhanushka/stack/roi.png', roi)
# OpenCV 4.x: findContours returns (contours, hierarchy)
cont = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
output = np.zeros(gray.shape, dtype=np.uint8)
cv2.drawContours(output, cont[0], -1, (255, 255, 255))
# remove the image-boundary pixels from the drawn contour
boundary = 255*np.ones(gray.shape, dtype=np.uint8)
boundary[1:boundary.shape[0]-1, 1:boundary.shape[1]-1] = 0
toremove = output & boundary
output = output ^ toremove

how to center MRI images

I work on MRIs. The problem is that the images are not always centered. In addition, there are often black bands around the patient's body.
I would like to be able to remove the black borders and center the patient's body like this:
I have already tried to determine the edges of the patient's body by reading the pixel array, but I haven't come up with anything very conclusive.
In fact, my solution works on only 50% of the images... I don't see any other way to do it...
Development environment: Python 3.7 + OpenCV 3.4
I'm not sure this is the standard or most efficient way to do this, but it seems to work:
import cv2
import numpy as np

# Load image as grayscale (since it's b&w to start with)
im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold it. I tried a few pixel values and got something reasonable at min = 5
_, thresh = cv2.threshold(im, 5, 255, cv2.THRESH_BINARY)
# Find contours (OpenCV 3.x returns three values here):
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Put all contour points together and reshape to (_, 2).
# The first "column" holds the x values of the contours, the second the y values
c = np.vstack(contours).reshape(-1, 2)
# Extract the leftmost, rightmost, uppermost, and lowermost points
xmin = np.min(c[:, 0])
ymin = np.min(c[:, 1])
xmax = np.max(c[:, 0])
ymax = np.max(c[:, 1])
# Use those as a guide of where to crop the image
crop = im[ymin:ymax, xmin:xmax]
cv2.imwrite('cropped.jpg', crop)
What you get in the end is this:
There are multiple ways to do this; this answer is pretty much computer vision tips and tricks.
If the mass is in the center and the area outside it is always going to be black, you can threshold the image and then find the edge pixels as you already do. I'd add 10 pixels to the border to adjust for variance in the thresholding process, as in the small sketch below.
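For instance, a hypothetical padding of the crop from the snippet above (reusing that snippet's variable names):

pad = 10  # extra margin to absorb thresholding variance
h, w = im.shape
crop = im[max(ymin - pad, 0):min(ymax + pad, h),
          max(xmin - pad, 0):min(xmax + pad, w)]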
Or, if the body is always similarly sized, you can find the centroid of the blob (the white area in the thresholded image) and crop a fixed area around it, as sketched below.
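A minimal sketch of the centroid option, assuming a roughly constant body size (the threshold of 5 and the 256 px half-window are assumptions to tune):

import cv2

im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(im, 5, 255, cv2.THRESH_BINARY)
# image moments of the binary mask give the blob centroid
M = cv2.moments(thresh, binaryImage=True)
cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
half = 256  # half-size of the fixed crop window
crop = im[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
cv2.imwrite('centered.jpg', crop)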

Rectangular bounding boxes around objects in monochrome images in python?

I have a set of two monochrome images [attached] where I want to put rectangular bounding boxes around both of the persons in each image. I understand that cv2.dilate may help, but most of the examples I see focus on detecting one rectangle containing the maximum pixel intensities, so essentially they put one big rectangle on the image. I would like to have two separate rectangles.
UPDATE:
This is my attempt:
import numpy as np
import cv2
from matplotlib import pyplot as plt

im = cv2.imread('splinet.png', 0)
print im.shape
kernel = np.ones((50, 50), np.uint8)
dilate = cv2.dilate(im, kernel, iterations=10)  # note: not used below
ret, thresh = cv2.threshold(im, 127, 255, 0)
im3, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
plt.imshow(im, cmap='Greys_r')
#plt.imshow(im3, cmap='Greys_r')
for i in range(0, len(contours)):
    if (i % 2 == 0):
        cnt = contours[i]
        #mask = np.zeros(im2.shape, np.uint8)
        #cv2.drawContours(mask, [cnt], 0, 255, -1)
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(im, (x, y), (x + w, y + h), (255, 255, 0), 5)
        plt.imshow(im, cmap='Greys_r')
        cv2.imwrite(str(i) + '.png', im)
cv2.destroyAllWindows()
And the output is attached below. As you can see, small boxes are being drawn, and the result is not very clear either.
The real problem in your question lies in the selection of the optimal threshold from the monochrome image.
To do that, calculate the median of the grayscale image (the second image in your post). The threshold level is set 33% above this median value; any pixel below this threshold is set to zero.
This is what I got:
Now, performing morphological dilation followed by contour operations, you can highlight your regions of interest with rectangles.
Note:
Never set a manual threshold as you did; the right threshold varies between images. Hence, always opt for a threshold based on the median of the image. A sketch of this pipeline follows.
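A minimal sketch of the median-based pipeline described above (the file name, the exact 1.33 factor, and the dilation kernel size are assumptions to tune):

import cv2
import numpy as np

im = cv2.imread('splinet.png', cv2.IMREAD_GRAYSCALE)
med = np.median(im)
thresh_val = min(255, 1.33 * med)  # 33% above the median
_, binary = cv2.threshold(im, thresh_val, 255, cv2.THRESH_BINARY)
binary = cv2.dilate(binary, np.ones((15, 15), np.uint8), iterations=2)
# [-2] picks the contour list on both OpenCV 3.x and 4.x
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
vis = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 255), 2)
cv2.imwrite('boxes.png', vis)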
