Python: Blur specific region in an image

I'm trying to blur specific regions in a 2D image (the data is an m x n array).
The regions are specified by an m x n mask. cv2 and scikit-image are available.
I tried:
Simply applying blur filters to the masked image. But that isn't working.
Extracting the points to blur by setting the rest to np.nan, blurring, and reassembling. That also doesn't work, because the blur obviously needs the surrounding points to compute correctly.
Any ideas?
Cheers

What was the result in the first case? It sounds like a good approach. What did you expect and what did you get?
You can also try something like this:
Either create a copy of the whole image or just a slightly bigger ROI (to include the samples that will be used for blurring)
Apply blur to the created image
Apply masks to the two images (from the original image take everything except the ROI, and from the blurred image take the ROI)
Add the two masked images
If you want a smoother transition, make sure the masks aren't binary. You can smooth them using another blur (blur one mask and create the second one as mask2 = 1 - mask1; that way the weights always add up to one). A sketch of this blend follows.
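Here is a minimal sketch of that blend in Python with OpenCV; the file names and kernel sizes are assumptions, and mask.png is assumed to be white where you want the blur:

import cv2
import numpy as np

img = cv2.imread('input.png').astype(np.float32)
# float mask in [0, 1]; white marks the region to blur
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

blurred = cv2.GaussianBlur(img, (21, 21), 0)  # blur the whole image first
soft = cv2.GaussianBlur(mask, (21, 21), 0)    # soften the mask for a smooth transition
soft = cv2.merge([soft, soft, soft])          # replicate to 3 channels for a BGR image

# weighted blend: blurred inside the mask, original outside; weights sum to one
out = soft * blurred + (1.0 - soft) * img
cv2.imwrite('result.png', out.astype(np.uint8))

Because the blur runs on the full image, the samples around the region are available, which avoids the np.nan problem from the question.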

Related

Python / Pillow cut image with larger mask

To create clean isometric tiles, I want to cut everything outside the mask.
So far I have
from PIL import Image

img = Image.open('grass.png')
mask = Image.open('mask.png').convert('L')  # greyscale mask
img.putalpha(mask)                          # replace the alpha channel with the mask
img.save('result.png')
[Images: input, mask, actual result, expected result]
It successfully cuts the bottom left and right edges, but the top part comes out black, and I want that to be transparent as well. In other words, I only want to cut the parts of the input image that exceed the mask.
Of course, in this specific case I could have just created a mask for a bottom tile, but since I have many different tiles, I want to use one generic mask. I then thought about removing black pixels afterwards, but there may be black pixels in my input images too, so that isn't a good option either.
I have found a couple of similar questions here, but they only cover masks that are smaller than the input image, not bigger, which makes the difference in this case.
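One possible approach (a sketch, not from the original thread) is to combine the mask with the image's existing alpha channel instead of replacing it, so pixels that are already transparent stay transparent. This assumes the mask has been cropped or resized to the image's size first:

from PIL import Image, ImageChops

img = Image.open('grass.png').convert('RGBA')
mask = Image.open('mask.png').convert('L')

# keep a pixel only where BOTH the original alpha and the mask allow it,
# so areas that were transparent in the input don't come back as black
alpha = ImageChops.multiply(img.getchannel('A'), mask)
img.putalpha(alpha)
img.save('result.png')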

Merging three greyscale images into one rgb image

I have three greyscale masks generated by OpenCV, each filtering for one specific color. I want to be able to merge them quickly, without looping through every pixel in the image (my application requires it to run in real time), and get an output similar to this: [example image]
I've been able to create the three masks separately, but they still need to be combined into one image, where each mask represents a different channel. The first mask would be the red channel, the second would be green, and the third blue.
Clarification: each mask is basically 1/3 of the final image I want to create. I need a way to combine them so that they don't all end up being the same color in the output and become incomprehensible.
More details:
I want to avoid using lots of loops, since the current filter takes 4 seconds to process a 272 by 154 image. The masks are just masks created with the cv2.inRange function.
I'm not very good with numpy or OpenCV yet, so any solution that runs reasonably fast (if it can process 15-20 fps it's totally usable) would be a great help.
As @Rotem said, using cv2.merge to combine the three matrices into one BGR image seems to be one of the best solutions. It's really fast. Thanks!
bgr = cv2.merge((b, g, r))
I don't know how I didn't see it while reading the documentation. Oh well.
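For completeness, a hedged sketch of the cv2.merge route, using hypothetical placeholder masks in place of the three cv2.inRange outputs; note that OpenCV expects channels in blue-green-red order:

import cv2
import numpy as np

# hypothetical stand-ins for the three cv2.inRange masks (272 x 154 image)
r = np.zeros((154, 272), dtype=np.uint8); r[:50, :] = 255
g = np.zeros((154, 272), dtype=np.uint8); g[50:100, :] = 255
b = np.zeros((154, 272), dtype=np.uint8); b[100:, :] = 255

bgr = cv2.merge((b, g, r))  # one single-channel mask per output channel
cv2.imwrite('merged.png', bgr)

Since cv2.merge is a single vectorised call, it comfortably handles 15-20 fps at this image size.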
Another way I used once (note that this concatenates the three images side by side rather than merging them into colour channels):
def merge_images(img1, img2, img3):
    # promote each greyscale image to 3 channels so they can be concatenated
    img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2RGB)
    img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2RGB)
    img3 = cv2.cvtColor(img3, cv2.COLOR_GRAY2RGB)
    # stack the three images horizontally (axis=1)
    img = np.concatenate((img1, img2, img3), axis=1)
    return img

Extract Data from an Image with Python/OpenCV/Tesseract?

I'm trying to extract some contents from a cropped image. I tried pytesseract and OpenCV template matching, but the results are very poor. OpenCV template matching sometimes fails due to the poor quality of the icons, and Tesseract gives me a line of text with false characters.
I'm trying to grab the values like this:
0:26 83 1 1
Any thoughts or techniques?
A technique you could use would be to blur your image. From the looks of it, the image is fairly low-resolution and blurry already, so you wouldn't need to blur it very hard. Whenever I need a blur in OpenCV, I normally choose the Gaussian blur, since it weights each pixel together with its surrounding pixels. Once the image is blurred, threshold or adaptive-threshold it. At that point the image should be mostly hard lines with little bits of short lines mixed in between. Next, dilate the thresholded image just enough to connect the areas with many hard edges. Once the dilation is done, find the contours of that image and sort them by their vertical position in the image. Since I assume the position of those numbers won't change, you only need to sort the contours by the y coordinate of their bounding boxes. Finally, create bounding boxes around the sorted contours and read the text from those regions (a code sketch of this method follows the two lists below).
However, if you want to do this the quick and dirty way, you can always just manually create your own ROIs around each area you want to read and do it that way.
First Method
Gaussian blur the image
Threshold the image
Dilate the image
Find Contours
Sort Contours based on height
Create bounding boxes around relevant contours
Second Method
Manually create ROI's around the area you want to read text from
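A rough sketch of the first method, assuming pytesseract with a local Tesseract install; the file name, kernel sizes and thresholds are assumptions that would need tuning for the actual screenshot:

import cv2
import numpy as np
import pytesseract

img = cv2.imread('cropped.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

blur = cv2.GaussianBlur(gray, (3, 3), 0)                 # light blur, image is soft already
thresh = cv2.threshold(blur, 0, 255,
                       cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
dilated = cv2.dilate(thresh, np.ones((3, 9), np.uint8))  # connect nearby characters

contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort top-to-bottom by the y coordinate of each bounding box
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[1])

for x, y, w, h in boxes:
    roi = gray[y:y + h, x:x + w]
    text = pytesseract.image_to_string(roi, config='--psm 7')  # treat ROI as one text line
    print(text.strip())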

Detect rectanglular signature fields in document scans using OpenCV

I am trying to extract big rectangular boxes from document images with signatures in them. Since I don't have training data (for deep learning), I want to cut these rectangular boxes (3 in all images) out of the images using OpenCV.
Here is what I tried:
import numpy as np
import cv2

img = cv2.imread('S-0330-444-20012800.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, 1)
contours, h = cv2.findContours(thresh, 1, 2)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [cnt], 0, (26, 60, 232), -1)
cv2.imshow('img', img)
cv2.waitKey(0)
With the above code I get a lot of squares (around 152 small square-like contours) and, of course, not the 3 boxes.
Replies appreciated. [sample image is attached]
I would suggest you read up on template matching. There is also a good OpenCV tutorial on this.
For your use case, the idea would be to generate a stereotyped image of a rectangular box with the same shape (width/height ratio) as the boxes found on your documents. Depending on whether your input images always show the document at the same scale, you would either need to resize the inputs to keep their magnification constant, or operate with a template bank (e.g. an array of box templates at various scales).
Briefly, you would then cross-correlate the template box(es) with the input image and (in case of well-matched scaling) would find ideally relatively sharp peaks indicating the centers of your document boxes.
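A hedged sketch of that idea with cv2.matchTemplate; the file names and the 0.7 threshold are assumptions, and a real implementation would also de-duplicate overlapping hits:

import cv2
import numpy as np

img = cv2.imread('document.jpg', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('box_template.png', cv2.IMREAD_GRAYSCALE)  # stereotyped empty box
h, w = template.shape

# normalised cross-correlation; peaks mark likely box locations
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.7)

for x, y in zip(xs, ys):
    cv2.rectangle(img, (x, y), (x + w, y + h), 0, 2)
cv2.imwrite('matches.jpg', img)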
In the code above, use image pyramids (to merge unwanted contour noise) in combination with cv2.findContours. After that, filtering the list of contours by contour area with cv2.contourArea will leave only the bigger squares; a sketch of this filter follows below.
There is also an alternative solution. Looking at the images, we can see that the signature text is usually bigger than the printed text in that ROI, so we can filter out contours smaller than the signature contours and extract only the signature.
It's always good to remove noise before using cv2.findContours, e.g. with dilation, erosion or blurring.
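A sketch of that area filter applied to the question's code; the 10000-pixel area threshold is an assumption to tune on the real scans:

import cv2

img = cv2.imread('S-0330-444-20012800.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)  # denoise before findContours
thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)[1]

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    # keep only quadrilaterals big enough to be signature boxes
    if len(approx) == 4 and cv2.contourArea(cnt) > 10000:
        cv2.drawContours(img, [cnt], 0, (26, 60, 232), 3)
cv2.imwrite('boxes.jpg', img)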

Check for areas that are too thin in an image

I am trying to validate black and white images (clipart-style images, not photos) for an engraving machine.
One of the major things I need to take into consideration is the size of areas (or the width of lines), since the machine can't handle lines that are too thin; so I need to find areas that are thinner than a given threshold.
Take this image for example: [harp clipart]
The strings of the harp might be too thin to engrave.
I am reading about Matlab and OpenCV but image processing is an area I am learning about for the first time.
I am a Java / C# developer so implementation done with one of those languages will be best for me but any direction will be greatly appreciated.
A solution using matlab utilizing image morphological operations:
Define the minimal thickness of an allowed area, for example minThick = 4:
BW = imread('http://i.stack.imgur.com/oXKep.jpg');
BW = BW(:,:,1) < 128; %// convert image to binary mask
se = strel('disk', minThick/2, 0); %// define a disk element
eBW = imerode( BW, se ); %// "chop" half thickness from mask
deBW = imdilate( eBW, se ); %// dilate the eroded mask
Eroding and then dilating should leave regions thicker than minThick unchanged, but remove the thin areas:
invalidArea = BW & ~deBW; %// pixels that are in BW but not in deBW
Resulting with: [output image highlighting the invalid, too-thin areas]
You can read more about imdilate and imerode in the linked documentation.
This is primarily for self-containment: the following is the equivalent of what @Shai did above, written in Python using the numpy and OpenCV packages:
import numpy as np # Import numpy package
import cv2 # Import OpenCV package
orig = cv2.imread('oXKep.jpg') # Read in image from disk
BW = orig[:,:,2] < 128 # Threshold below 128 to produce a binary mask
minThick = 5 # Define minimum thickness
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (minThick,minThick)) # define a disk element
finalBW = 255*cv2.morphologyEx(BW.astype('uint8'), cv2.MORPH_OPEN, se) # "chop" half thickness from mask and dilate the eroded mask
# Find invalid area
invalidArea = 255*np.logical_and(BW, np.logical_not(finalBW)).astype('uint8')
# Show original image
cv2.imshow('Original', orig)
# Show opened result
cv2.imshow('Final', finalBW)
# Show invalid lines
cv2.imshow('Invalid Area', invalidArea)
# Wait for user input then close windows
cv2.waitKey(0)
cv2.destroyAllWindows()
A few intricacies that I need to point out:
OpenCV's imread function reads in colour channels in reverse order with respect to MATLAB. Specifically, the channels are read in with a blue-green-red order. This means that the first channel is blue, the second channel green and third channel red. In MATLAB, these are read in proper RGB order. Because this is a grayscale image, the RGB components are the same so it really doesn't matter which channel you use. However, in order to be consistent with Shai's method, the red channel is being accessed and so we need to access the last channel of the image through OpenCV.
The disk structuring element in MATLAB with a structure number of 0 is essentially a diamond shape. However, because OpenCV does not have this structuring element built-in, and I wanted to produce the minimum amount of code possible to get something going, the closest thing I could use was the elliptical shaped structuring element.
In order for the structuring element to be symmetric, you need to make sure that the size is odd, so I changed the size from 4 to 5 from Shai's example.
In order to show an image using OpenCV in Python, the image must be at least an unsigned 8-bit integer type. Displaying binary images is not supported, so I artificially made the binary images uint8 and multiplied the values by 255 before displaying them.
You can combine the erosion and dilation operations into one operation using morphological opening. Opening seeks to remove thin lines or disconnect thinly connected objects while maintaining the shape of the larger objects. That is the point of eroding first (to remove the thin lines, at the cost of slightly shrinking the objects) and then dilating (to restore the shapes mostly back to their original size). I exploited that by performing the morphological opening via cv2.morphologyEx.
This is what I get: [output images showing the opened result and the invalid areas]
