numpy.maximum.reduce not returning maximum value - python

Hi, I am trying to find the pixel-wise maximum of a set of binary masks, for which I have used numpy.maximum.reduce. My goal is to find the maximum across all the masks (which, as I understand it, points to the regions with the most edges in the images) while avoiding overlapping regions. But the result is not as expected: it is not keeping the white regions as the maximum, and instead takes the gray regions into the final output mask. To produce each mask I performed dilation and blurring of the edge-detected images, so will that affect the final result (i.e., comparing the masks to find the most edges)?
Q1. Is it the right approach to find the maximum of the masks and avoid overlapping regions?
Q2. Will it return the regions with most edges?
Q3. Should I use the masks or the original edge detected images for getting the most edges?
Results of combining masks
Individual Masked Images
import glob
import cv2
import numpy as np
from scipy import ndimage

# auto_canny and the background image im_1 are defined elsewhere in my script
image_1 = []  # list of masks
image_2 = []  # list of all extracted regions

for file in glob.glob("images/*.jpg"):
    img = cv2.imread(file)
    # Edge detection
    canny = auto_canny(img)
    # Dilation (morphological operation to increase edge width)
    img_dilate = cv2.dilate(canny, (3, 3), iterations=1)
    # Gaussian blur to smooth the edges and remove noise
    mask = ndimage.gaussian_filter(img_dilate, sigma=5)
    mask[mask < 30] = 0
    mask[(mask >= 30) & (mask < 70)] = 30
    mask[(mask >= 70) & (mask < 110)] = 110
    mask[mask > 110] = 255
    # Retrieve regions from the original image
    res = cv2.bitwise_and(img, img, mask=mask)
    mask_inv = cv2.bitwise_not(mask)
    # Add the extracted regions to the background image
    main_ = cv2.bitwise_and(im_1, im_1, mask=mask_inv)
    result = cv2.add(main_, res)
    cv2.imshow("result", mask)
    cv2.waitKey(0)
    image_1.append(mask)  # list of masks
    image_2.append(res)   # list of all extracted regions

for i in range(0, len(image_1)):
    max_ = np.maximum.reduce([image_1[i]])
    cv2.imshow("max", max_)
    cv2.waitKey(0)
Update
max_img = np.zeros(image_1[0].shape[:3], dtype=np.uint8)
max_img = np.maximum(max_img, image_1)
cv2.imshow("max", max_img)
cv2.waitKey(0)
With this updated code, I ran into the error: (-206) Unrecognized or unsupported array type in function cvGetMat

Here is a simple example of getting the maximum value pixel-by-pixel of 4 binary mask images using Python/OpenCV/Numpy
4 Masks:
import cv2
import numpy as np
# read masks
mask1 = cv2.imread('mask1.png')
mask2 = cv2.imread('mask2.png')
mask3 = cv2.imread('mask3.png')
mask4 = cv2.imread('mask4.png')
# check the shape so that the zeros image (below) is created with a compatible shape
print(mask1.shape)
# make list
masks = [mask1, mask2, mask3, mask4]
# initialize maximum array to zeros
image_max = np.full((mask1.shape), (0,0,0), dtype=np.uint8)
# get maximum pair-wise from mask and previous maximum image
for mask in masks:
    image_max = np.maximum(image_max, mask)
# show results
cv2.imshow('maximum', image_max)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('mask1-4max.png', image_max)
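A note on numpy.maximum.reduce itself (my addition, not part of the original answer): applied to the whole list of masks, it computes the same pixel-wise maximum in a single call. The loop in the question reduced over the one-element list [image_1[i]], and reducing over a single element just returns that element unchanged, which is why no maximum ever showed up.
# Equivalent one-liner: pixel-wise maximum across all masks in the list.
# np.maximum.reduce([one_mask]) would just return one_mask unchanged,
# which is the bug in the question's loop.
image_max = np.maximum.reduce(masks)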

Related

How to keep Only Black color text in the image using OpenCV Python?

I have the following image:
I want to keep only the black colored text 0790 and remove everything else from the picture. This Stack Overflow question teaches how to remove a color. However, I need to keep the color, not remove it.
A possible solution involves converting the image to the CMYK color space and extracting the K (Key - black) channel, thresholding it and applying some morphology to clean up the binary image.
OpenCV does not implement the conversion from BGR to CMYK, so we have to compute the K channel manually. The code would look like this:
# Imports
import cv2
import numpy as np
# Read image
imagePath = "D://opencvImages//"
inputImage = cv2.imread(imagePath + "A6RXi.png")
# Conversion to CMYK (just the K channel):
# Convert to float and divide by 255:
imgFloat = inputImage.astype(np.float64) / 255.
# Calculate channel K:
kChannel = 1 - np.max(imgFloat, axis=2)
# Convert back to uint 8:
kChannel = (255 * kChannel).astype(np.uint8)
This is the K (black) Channel:
Now, threshold the image using a fixed value. In this case, I set the threshold to 190:
# Threshold image:
binaryThresh = 190
_, binaryImage = cv2.threshold(kChannel, binaryThresh, 255, cv2.THRESH_BINARY)
This is the binary image:
It is a little noisy, but we can remove the smaller blobs if we implement an area filter. The function is defined at the end of this post. Let's apply the filter with a minimum value of 100. All blobs smaller than this will be erased:
# Filter small blobs:
minArea = 100
binaryImage = areaFilter(minArea, binaryImage)
This is the filtered image:
Cool. Let's improve the morphology of the blobs with a closing filter:
# Use a little bit of morphology to clean the mask:
# Set kernel (structuring element) size:
kernelSize = 3
# Set morph operation iterations:
opIterations = 2
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
binaryImage = cv2.morphologyEx(binaryImage, cv2.MORPH_CLOSE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
cv2.imshow("binaryImage [closed]", binaryImage)
cv2.waitKey(0)
This is the final result:
And this is the areaFilter function. It receives a minimum area and a binary image, and returns the image free of small blobs:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)
    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
    return filteredImage
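Incidentally, the hard-coded column index 4 in componentStats is what OpenCV names cv2.CC_STAT_AREA. A tiny self-contained check (with a made-up 30x30 blob) shows the named constant picking out the same area column:
import cv2
import numpy as np

# One synthetic 30x30 white blob on a black background
binary = np.zeros((100, 100), np.uint8)
binary[10:40, 10:40] = 255

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=4)
# stats[i][cv2.CC_STAT_AREA] is the same value as stats[i][4]
print(stats[1][cv2.CC_STAT_AREA])  # prints 900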

How can I remove these parallel lines noise on my image using opencv

I'm new to OpenCV and I'm trying to remove all these diagonal parallel lines that are noise in my image.
I have tried using HoughLinesP after some erosion/dilation, keeping only the lines with an angle near 135 degrees, but the result is poor.
import math
import cv2
import numpy as np

img = cv2.imread('images/dungeon.jpg')
ret,img = cv2.threshold(img,180,255,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(5,5))
eroded = cv2.erode(img,element)
dilate = cv2.dilate(eroded, element)
skeleton = cv2.subtract(img, dilate)
gray = cv2.cvtColor(skeleton,cv2.COLOR_BGR2GRAY)
minLineLength = 10
lines = cv2.HoughLinesP(gray, 1, np.pi/180, 1, 10, 0.5)
for line in lines:
    for x1, y1, x2, y2 in line:
        angle = math.atan2(y2 - y1, x2 - x1)
        if -0.1 < angle < 0.1:
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 1)

cv2.imshow("result", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
My thinking here was to detect these lines in order to remove them afterwards, but I'm not even sure that's the right way to do this.
I guess you are trying to get the contours of the walls, right? Here’s a possible path to the solution using mainly spatial filtering. You will still need to clean the results to get where you want. The idea is to try and compute a mask of the parallel lines (high-frequency noise) of the image and calculate the difference between the (binary) input and this mask. These are the steps:
Convert the input image to grayscale
Apply Gaussian Blur to get rid of the high-frequency noise you are trying to eliminate
Get a binary image of the blurred image
Apply area filters to get rid of everything that is not noise, to get a noise mask
Compute the difference between the original binary mask and the noise mask
Clean up the difference image
Compute contours on this image
Let’s see the code:
import cv2
import numpy as np
# Set image path
path = "C://opencvImages//"
fileName = "map.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Apply Gaussian Blur:
blurredImage = cv2.GaussianBlur(grayscaleImage, (3, 3), cv2.BORDER_DEFAULT)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(blurredImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Save a copy of the binary mask
binaryCopy = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)
This is the output:
Up until now, you have this binary mask. The process so far has smoothed the noise, creating thick black blobs where the noise is located. Again, the idea is to generate a noise mask that can be subtracted from this image.
Let’s apply an area filter and try to remove the big white blobs, which are NOT the noise we are interested in. I’ll define the function towards the end; for now I just want to present the general idea:
# Set the minimum pixels for the area filter:
minArea = 50000
# Perform an area filter on the binary blobs:
filteredImage = areaFilter(minArea, binaryImage)
The filter will suppress every white blob below the minimum area. The value is big because in this particular case we are interested in preserving only the black blobs. This is the result:
We have a pretty solid mask. Let’s subtract this from the original binary mask we created earlier:
# Get the difference between the binary image and the mask:
imgDifference = binaryImage - filteredImage
This is what we get:
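One caveat with the plain subtraction above (my note, not part of the original answer): both images are uint8, so NumPy's - wraps around wherever filteredImage is white and binaryImage is black. cv2.subtract clamps negative results to 0 instead:
# Saturating subtraction: any negative result is clamped to 0
imgDifference = cv2.subtract(binaryImage, filteredImage)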
The difference image has some small noise. Let’s apply the area filter again to get rid of it, this time with a more traditional threshold value:
# Set the minimum pixels for the area filter:
minArea = 20
# Perform an area filter on the binary blobs:
filteredImage = areaFilter(minArea, imgDifference)
Cool. This is the final mask:
Just for completeness, let’s compute contours on this input, which is very straightforward:
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(filteredImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Draw the contours on the mask image:
cv2.drawContours(binaryCopy, contours, -1, (0, 255, 0), 3)
Let’s see the result:
As you can see, it is not perfect. However, there’s still some room for improvement, and perhaps you can polish this idea a little bit more to get a potential solution. Here's the definition and implementation of the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)
    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
    return filteredImage

OpenCV + Python - get average RGB around point

I am new to OpenCV.
My idea is: I have a picture with 4 defined points (pixels?), e.g. (0,0), (0,100), (100,0), (100,100).
What would be the best approach to probe each of those points, but with a square created around them?
So, e.g. for (0,0) (well, not the best example, as the square can't extend around it), say for a point at (50,50), create some kind of mask around that pixel, say a 10x10 pixel square, then get the average RGB of that square, and then do it for all points.
So far I can only probe single points for RGB, but I don't have an idea how to approach masking.
I have a feeling OpenCV could have an easy solution for that, but all I am finding is super overcomplicated (imho) code that I don't really understand.
If you have an irregular region, then make a mask for it. You can compute the mean of region corresponding to the mask in Python/OpenCV as follows:
Input:
Mask:
import cv2
# load image
img = cv2.imread('zelda1.jpg')
# load mask as grayscale
mask = cv2.imread('zelda1_mask.png', 0)
# get mean of pixels corresponding to mask
mean = cv2.mean(img, mask=mask)
# print the mean of each channel; the 4th value is 0 since there is no alpha channel
print(mean)
# mask region on input
region = img.copy()
img_masked = cv2.bitwise_and(img, img, mask=mask)
# Save result
cv2.imwrite('zelda1_region2.jpg', img_masked)
# Display input
cv2.imshow('input', img)
cv2.imshow('mask', mask)
cv2.imshow('input masked', img_masked)
cv2.waitKey(0)
cv2.destroyAllWindows()
Region of image where mean is computed:
Mean:
(50.23702664796634, 32.84151472650771, 198.3702664796634, 0.0)
Here is one way to do that in Python/OpenCV using NumPy slicing to get a square region about any given point.
Input:
import cv2
# load image
img = cv2.imread('zelda1.jpg')
# Define point
x = 90
y = 200
# Define region size
rr = 10
# crop a square +-10 pixels about the point
crop = img[y-rr:y+rr, x-rr:x+rr]
# compute mean
mean = cv2.mean(crop)
# print the mean of each channel; the 4th value is 0 since there is no alpha channel
print(mean)
# draw region on input
region = img.copy()
cv2.rectangle(region, (x-rr,y-rr), (x+rr,y+rr), (255,255,255), 1)
# Save result
cv2.imwrite('zelda1_region.jpg', region)
# Display input
cv2.imshow('input', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Region:
Mean of region for each channel:
(53.6175, 35.9, 205.2375, 0.0)
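Putting the same slicing idea back into the original ask, here is a minimal sketch (the point list and window half-size are made-up values) that probes several points at once, clamping each square to the image borders:
import cv2

img = cv2.imread('zelda1.jpg')
h, w = img.shape[:2]

# Hypothetical probe points (x, y); a 10x10 window means a half-size of 5
points = [(50, 50), (0, 100), (100, 0), (100, 100)]
rr = 5

for (x, y) in points:
    # Clamp the window so points near the border stay inside the image
    x0, x1 = max(x - rr, 0), min(x + rr, w)
    y0, y1 = max(y - rr, 0), min(y + rr, h)
    mean = cv2.mean(img[y0:y1, x0:x1])
    print((x, y), mean[:3])  # average B, G, R of the square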

Determine if a specific image is contained within another, with a simple True/False

I would like to know if a big image contains a small image. The small image can be semi-transparent (similar to a watermark, so it's not a fully filled photo). I've tried following different SO answers on this topic, but they all match the EXACT photo, whereas what I am looking for is whether the photo exists with about 80% accuracy, since the photo will be a lossy rendered version of the original.
This is the procedure by which the images I am searching in will be generated:
Use any photo, put a semi-transparent "watermark" on it in Photoshop and save it. Then I want to check whether the "watermark" exists within the created photo with a certain percentage of accuracy (80% is good enough).
I've tried using the original template matching example provided on their docs page, but I'm getting barely any match at all.
This is the code I'm using:
import cv2
import numpy as np
img_rgb = cv2.imread('photo2.jpeg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('small-image.png', 0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
threshold = 0.7
loc = np.where( res >= threshold)
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)
cv2.imshow('output', img_rgb)
cv2.waitKey(0)
Here are the photos I've been using for the test, as this is something similar I am trying to make a match on.
small-image.png
photo2.jpeg
I am assuming the whole watermark has the same RGB values and the text has slightly different RGB values; otherwise this technique will not work. Based on this, we can obtain the RGB values of a pixel of the small image and treat them as a mask by using cv2.inRange to find those pixel values in the large image. Similarly, a mask is also created for the small image using those pixel values.
import cv2
import numpy as np

small = cv2.imread('small_find.png')
large = cv2.imread('large_find.jpg')
pixel = np.reshape(small[3,3], (1,3))
lower =[pixel[0,0]-10,pixel[0,1]-10,pixel[0,2]-10]
lower = np.array(lower, dtype = 'uint8')
upper =[pixel[0,0]+10,pixel[0,1]+10,pixel[0,2]+10]
upper = np.array(upper, dtype = 'uint8')
mask = cv2.inRange(large,lower, upper)
mask2 = cv2.inRange(small, lower, upper)
I had to take a buffer value of 20 because the values were not matching exactly in the large image; otherwise a buffer of only 1 on either the upper or lower bound would be enough. Then we find contours in mask, take the bounding rectangle of the largest one, cut that region out, and resize it to the size of mask2.
im, contours, hierarchy = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
#cv2.drawContours(large, contours, -1, (0,0,255), 1)
cnt = max(contours, key = cv2.contourArea)
x,y,w,h = cv2.boundingRect(cnt)
wanted_part = mask[y:y+h, x:x+w]
wanted_part = cv2.resize(wanted_part, (mask2.shape[1], mask2.shape[0]), interpolation = cv2.INTER_LINEAR)
The two masks side by side (inverted, otherwise they were not visible).
For comparing them, you can use any metric and check whether it satisfies your condition or not. I used mean squared error and got an error of only 6.20, which is very low.
def MSE(img1, img2):
    # Cast to float so the difference cannot wrap around in uint8, then square it
    squared_diff = (img1.astype(np.float64) - img2.astype(np.float64)) ** 2
    summed = np.sum(squared_diff)
    num_pix = img1.shape[0] * img1.shape[1]  # img1 and img2 should have the same shape
    err = summed / num_pix
    return err
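Used on the two masks from above, the call would look like this (a sketch, assuming wanted_part and mask2 from the earlier snippets):
# Low values mean the cropped large-image mask closely matches the small-image mask
error = MSE(wanted_part, mask2)
print(error)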

How to remove hair from skin images using opencv?

I am working on recognition of skin spots. For this, I work with a number of images that contain different kinds of noise. One of these noise sources is hair, since I have images with hairs over the area of the spot (ROI). How can I decrease or remove this type of image noise?
The code below decreases the area where hairs are, but does not remove hairs that lie over the area of interest (ROI).
import numpy as np
import cv2
IMD = 'IMD436'
# Read the image and perform an OTSU threshold
img = cv2.imread(IMD+'.bmp')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Remove hair with opening
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# Combine surrounding noise with ROI
kernel = np.ones((6,6),np.uint8)
dilate = cv2.dilate(opening,kernel,iterations=3)
# Blur the image for smoother ROI
blur = cv2.blur(dilate,(15,15))
# Perform another OTSU threshold and search for biggest contour
ret, thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result
cv2.imwrite(IMD+'.png', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
How can I remove hair from the top of my region of interest?
Images used:
I am responding to your tag on a related post. As I understand it, you and another colleague are working together on a project to locate the moles on the skin? I think I have already given help to one or maybe both of you on similar questions, and already mentioned that the removal of the hair is a very tricky and difficult task. If you remove the hair from the image you lose information and you can't replace that part of the image (no program or algorithm can guess what is under the hair - though it can make an estimation). What you could do, as I mentioned in other posts, and what I think would be the best approach, is to learn about deep neural networks and make your own for hair removal. You can google "watermark removal deep neural network" and see what I mean. That being said, your code does not seem to extract all the ROIs (the moles) present in your example image. I have made another example of how you can better extract the moles. Basically, you should perform closing before transforming to binary, and you will get better results.
For the second part - hair removal - if you do not wish to make a neural network, I think an alternative solution could be to calculate the mean pixel intensity of the region that contains the mole, then iterate through every pixel and apply some sort of criterion for how much a pixel may differ from the mean. Hair seems to be represented by pixels that are darker than the mole area. So when you find such a pixel, replace it with a neighbouring pixel that does not fall under this criterion. In the example I have made a simple logic which will not work with every image, but it can serve as an example. To make a fully operational solution you should make a better, more complex algorithm, which I guess will take quite some time. Hope it helps a bit! Cheers!
import numpy as np
import cv2
from PIL import Image
# Read the image and perform an OTSU threshold
img = cv2.imread('skin2.png')
kernel = np.ones((15,15),np.uint8)
# Perform closing to remove hair and blur the image
closing = cv2.morphologyEx(img,cv2.MORPH_CLOSE,kernel, iterations = 2)
blur = cv2.blur(closing,(15,15))
# Binarize the image
gray = cv2.cvtColor(blur,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Search for contours and select the biggest one
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Calculate the mean color of the contour
mean = cv2.mean(res, mask = mask)
print(mean)
# Make some sort of criterion as the ratio hair vs. skin color varies
# thus makes it hard to unify the threshold.
# NOTE that this is only for example and it will not work with all images!!!
if mean[2] > 182:
    bp = mean[0] / 100 * 35
    gp = mean[1] / 100 * 35
    rp = mean[2] / 100 * 35
elif 182 > mean[2] > 160:
    bp = mean[0] / 100 * 30
    gp = mean[1] / 100 * 30
    rp = mean[2] / 100 * 30
elif 160 > mean[2] > 150:
    bp = mean[0] / 100 * 50
    gp = mean[1] / 100 * 50
    rp = mean[2] / 100 * 50
elif 150 > mean[2] > 120:
    bp = mean[0] / 100 * 60
    gp = mean[1] / 100 * 60
    rp = mean[2] / 100 * 60
else:
    bp = mean[0] / 100 * 53
    gp = mean[1] / 100 * 53
    rp = mean[2] / 100 * 53
# Write temporary image
cv2.imwrite('temp.png', res)
# Open the image with PIL and load it to RGB pixelpoints
mask2 = Image.open('temp.png')
pix = mask2.load()
x,y = mask2.size
# Iterate through the image and apply some sort of logic to replace the pixels
# that differ from the mean of the image
# NOTE that this algorithm is an example and it will not work with other images
for i in range(0, x):
    for j in range(0, y):
        if -1 < pix[i, j][0] < bp or -1 < pix[i, j][1] < gp or -1 < pix[i, j][2] < rp:
            try:
                # Replace a dark ("hair") pixel with the last accepted neighbour colour
                pix[i, j] = b, g, r
            except NameError:
                # No accepted neighbour seen yet: fall back to the mean colour
                pix[i, j] = (int(mean[0]), int(mean[1]), int(mean[2]))
        else:
            b, g, r = pix[i, j]
# Transform the image back to cv2 format and mask the result
res = np.array(mask2)
res = res[:,:,::-1].copy()
final = cv2.bitwise_and(res, res, mask=mask)
# Display the result
cv2.imshow('img', final)
cv2.waitKey(0)
cv2.destroyAllWindows()
You can try the following steps, at least to get a road map to a proper solution implementation:
Find the hair regions using adaptive local thresholding - Otsu's method or any other method. I think "local thresholding", or even "local histogram equalization followed by global thresholding", will find the hair regions.
To fill the hair regions, use "texture synthesis" to synthesize skin-like texture for the hair regions.
One good and easy method for texture synthesis is described in A.A. Efros and T.K. Leung, "Texture synthesis by non-parametric sampling", Proceedings of the International Conference on Computer Vision (ICCV), Kerkyra, Greece, 1999.
Texture synthesis will give a better result than averaging or median filtering for estimating the pixels in the hair regions.
Also, take a look at this paper, it should help you a lot:
http://link.springer.com/article/10.1007%2Fs00521-012-1149-1?LI=true
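If full texture synthesis is more than you need, a rough but much simpler stand-in (my suggestion, not from the answers above) is OpenCV's built-in inpainting, which fills masked pixels from their surroundings. This is only a sketch: the Otsu threshold below is a crude placeholder for a real hair mask from the thresholding step, and skin2.png is the file name used earlier in this thread:
import cv2

img = cv2.imread('skin2.png')

# Crude stand-in for a hair mask: white (255) where hair is assumed to be.
# A real mask should come from local thresholding as described above.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, hairMask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Fill masked pixels from their neighbourhood (radius 3, fast-marching method)
result = cv2.inpaint(img, hairMask, 3, cv2.INPAINT_TELEA)

cv2.imshow('inpainted', result)
cv2.waitKey(0)
cv2.destroyAllWindows()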
