How to calculate a histogram in OpenCV using a mask? - python

I need to calculate a histogram on only one part of my image, but this part has a circular shape (like a disc). I create a mask to find that part of the image:
cv2.rectangle(mask, (0, 0), (width, height), (0, 0, 0), -1)  # clear the whole mask to black
cv2.circle(mask, (int(avgkrug[0]), int(avgkrug[1])), radijusp2, (255, 255, 255), -1)  # filled outer circle
cv2.circle(mask, (int(avgkrug[0]), int(avgkrug[1])), radijusp1, (0, 0, 0), -1)  # punch out the inner circle
Using the code above, I find my disc-shaped region of interest.
Now I'm trying to calculate the histogram:
for ch, col in enumerate(color):
    hist_item = cv2.calcHist([img], [ch], mask, [256], [0, 255])
    ...
but I get this error:
error: (-215) !mask.data || mask.type() == CV_8UC1 in function cv::calcHist
However, if I save the mask to disk and read it back with cv2.imread(), the error doesn't appear.
I also tried this line:
hist_item = cv2.calcHist([slika],[ch],mask.astype(np.uint8),[256],[0,255])
How can I use the mask I create to calculate the histogram, without writing it to and reading it from disk?

The mask you create needs to be of type uint8, so create it as uint8 from the start and then pass it when computing the histogram.
mask = np.zeros(image.shape[:2], dtype="uint8")
Now compute the histogram by passing the original image and the corresponding mask:
hist_item = cv2.calcHist([image],[ch],mask,[256],[0,255])
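For reference, here is a minimal self-contained sketch of the whole fix; the image path, circle center, and radii are hypothetical stand-ins for the asker's avgkrug, radijusp1, and radijusp2 values:
import cv2
import numpy as np

img = cv2.imread('image.png')
center = (120, 120)          # hypothetical circle center
r_outer, r_inner = 80, 40    # hypothetical radii

# Build the mask directly as uint8 -- no round trip through disk needed.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.circle(mask, center, r_outer, 255, -1)  # filled outer disc
cv2.circle(mask, center, r_inner, 0, -1)    # punch out the inner disc

# Per-channel histograms restricted to the disc-shaped region.
for ch in range(3):
    hist_item = cv2.calcHist([img], [ch], mask, [256], [0, 256])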

Related

Set the values below a certain threshold of a CV2 Colormap to transparent

I'm currently trying to apply an activation heatmap to a photo.
Currently, I have the original photo, as well as a mask of probabilities. I multiply the probabilities by 255 and then round down to the nearest integer. I'm then using cv2.applyColorMap with cv2.COLORMAP_JET to apply the colormap to the image with an opacity of 25%.
img_cv2 = cv2.cvtColor(np_img, cv2.COLOR_RGB2BGR)
heatmapshow = np.uint8(np.floor(mask * 255))
colormap = cv2.COLORMAP_JET
heatmapshow = cv2.applyColorMap(np.uint8(heatmapshow - 255), colormap)
heatmap_opacity = 0.25
image_opacity = 1.0 - heatmap_opacity
heatmap_arr = cv2.addWeighted(heatmapshow, heatmap_opacity, img_cv2, image_opacity, 0)
This current code successfully produces a heatmap. However, I'd like to be able to make two changes.
Keep the opacity at 25% for all values above a certain threshold (likely > 0, but I'd prefer more flexibility), but when the mask is below that threshold, reduce the opacity to 0% for those cells. In other words, if there is very little activation, I want to preserve the color of the original image.
If possible I'd also like to be able to specify a custom colormap, since the native ones are pretty limited, though I might be able to get away without this if I can do the custom opacity thing.
I read on Stack Overflow that you can possibly trick cv2 into not overlaying any color with NaN values, but I also read that this only works for floats and not ints, which complicates things since I'm using uint8. I'm also concerned that this behaviour could change in the future, as I don't believe it is intentional design purposefully built into cv2.
Does anyone have a good way of accomplishing these goals? Thanks!
With regard to your second question:
Here is how to create a simple custom two-color gradient colormap in Python/OpenCV.
Input:
import cv2
import numpy as np
# load image as grayscale
img = cv2.imread('lena_gray.png', cv2.IMREAD_GRAYSCALE)
# convert to 3 equal channels
img = cv2.merge((img, img, img))
# create 1 pixel red image
red = np.full((1, 1, 3), (0,0,255), np.uint8)
# create 1 pixel blue image
blue = np.full((1, 1, 3), (255,0,0), np.uint8)
# append the two images
lut = np.concatenate((red, blue), axis=0)
# resize lut to 256 values
lut = cv2.resize(lut, (1,256), interpolation=cv2.INTER_LINEAR)
# apply lut
result = cv2.LUT(img, lut)
# save result
cv2.imwrite('lena_red_blue_lut_mapped.png', result)
# display result
cv2.imshow('RESULT', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result of colormap applied to image:
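The same idea extends to more than two colors: stack additional 1-pixel stops before resizing. A short sketch, assuming img is the 3-channel image prepared above:
# Hypothetical three-stop gradient (BGR order): red -> green -> blue.
stops = np.array([[(0, 0, 255)], [(0, 255, 0)], [(255, 0, 0)]], np.uint8)  # shape (3, 1, 3)
lut = cv2.resize(stops, (1, 256), interpolation=cv2.INTER_LINEAR)  # interpolate to 256 entries
result = cv2.LUT(img, lut)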
With regard to your first question:
You are blending the heat map image with the original image using a constant opacity value. You can replace that single value with an image: just do the addWeighted computation manually, as heatmap * opacity_img + original * (1 - opacity_img), where your opacity image is a float in the range 0 to 1. Then clip and convert back to uint8. If your opacity image is binary, you can use cv2.bitwise_and() in place of the multiply.
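A sketch of that manual blend, assuming img_cv2 and heatmapshow are the uint8 BGR images from the question, prob_mask is the float probability array (values in 0 to 1), and threshold is a value you choose:
import numpy as np

threshold = 0.1        # hypothetical cutoff
heatmap_opacity = 0.25

# Per-pixel opacity: 0.25 where the activation clears the threshold, 0 elsewhere.
opacity = np.where(prob_mask > threshold, heatmap_opacity, 0.0)[..., np.newaxis]

# Blend in float, then clip and convert back to uint8.
blended = heatmapshow * opacity + img_cv2 * (1.0 - opacity)
result = np.clip(blended, 0, 255).astype(np.uint8)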

Cropping image from a binary mask

I am trying to use DeepLab v3 to detect an object and produce a mask of where the object actually is.
The DeepLab model produces a resized image resized_im (3D) and a segmentation mask seg_map (2D) of zero and non-zero values, where 0 means background.
Currently, it is only possible to plot the image with an overlay mask on the object. I want to crop the object out of resized_im with a transparent background. Is there any advice for this?
You can play around with the notebook here:
https://colab.research.google.com/drive/138dTpcYfne40hqrb13n_36okSGYhrJnz?usp=sharing&hl=en#scrollTo=p47cYGGOQE1W&forceEdit=true&sandboxMode=true
I also tried the approaches here: How to crop image based on binary mask, but none of them seems to work in my case.
You just need to convert your segmentation mask to a boolean numpy array, then multiply the image by it. Don't forget that your image has 3 channels while the mask has only 1. It may look something like this:
# seg_map - segmentation mask from network, resized_im - your input image
mask = np.greater(seg_map, 0) # get only non-zero positive pixels/labels
mask = np.expand_dims(mask, axis=-1) # (H, W) -> (H, W, 1)
mask = np.concatenate((mask, mask, mask), axis=-1) # (H, W, 1) -> (H, W, 3), (don't like it, so if you know how to do it better, please let me know)
crops = resized_im * mask # apply mask on image
You can use a different logical numpy function if you want to select certain labels, for example:
mask = np.equal(seg_map, 5) # to get only objects with label 5
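Since the question asks for a transparent background rather than a black one, one possible extension is to use the mask as an alpha channel and save as PNG. A sketch, assuming resized_im is (or can be converted to) an (H, W, 3) uint8 RGB array:
import numpy as np
from PIL import Image

rgb = np.asarray(resized_im, dtype=np.uint8)
alpha = (np.greater(seg_map, 0) * 255).astype(np.uint8)  # 255 inside the object, 0 outside
rgba = np.dstack((rgb, alpha))                           # the mask becomes the alpha channel
Image.fromarray(rgba, mode='RGBA').save('crop.png')      # PNG preserves transparency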

Cropped image has additional dimension to pre-cropped image (python,numpy)

I'm new to Python and currently playing around with creating masks for a word cloud using Pillow and numpy.
I've encountered an issue between an original image and a cropped version of it (cropping done in MS Paint, where I also inverted the colours). When I run the following code:
mask = Image.open("C:/Users/d-j-h/downloads/original.png")
mask = np.array(mask)
mask2 = Image.open("C:/Users/d-j-h/downloads/cropped.png")
mask2 = np.array(mask2)
The original mask displays as expected (type uint8, shape (137, 361); looking at the array, you can make out the original image), whereas the cropped image has an additional dimension (type uint8, shape (70, 294, 3)) and looks nothing like the image. When I attempt some transformations (turning instances of 0 in the image into 255) with the following code:
def transform_format(val):
    if val == 0:
        return 255
    else:
        return val

transformed_mask = np.ndarray((mask.shape[0], mask.shape[1]), np.int32)
for i in range(len(mask)):
    transformed_mask[i] = list(map(transform_format, mask[i]))
it works perfectly for mask (the original image) but not for mask2, even if I change the code (mask → mask2) and add an extra dimension to the np.ndarray. I get the following error message:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Any help is greatly appreciated.
Some images are read in grayscale by default, but the cropped image, as it appears, is being read as RGB (3 channels).
Why doesn't it look similar to the original? It depends; you would probably need to upload the images to answer that.
As a solution, you can crop after reading the original image and converting it to numpy, to get what you need:
mask = Image.open("C:/Users/d-j-h/downloads/original.png")
mask = np.array(mask)
mask2 = mask[new_rows_start:rows_end, new_cols_start:cols_end]
This will result in a grayscale image; you need to know the new crop coordinates, though.
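Alternatively (a sketch, under the assumption that a single-channel mask is what you want), you can force the cropped file into grayscale at load time, so both masks come out as 2D uint8 arrays:
from PIL import Image
import numpy as np

# convert("L") collapses the RGB channels into one grayscale channel.
mask2 = np.array(Image.open("C:/Users/d-j-h/downloads/cropped.png").convert("L"))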

How to mask a video frame using contours with opencv in python

I am using opencv (the cv2 module) in Python to recognize objects in a video. In each frame, I want to extract a particular region, aka the contour. After learning from the opencv docs, I have the following code snippet:
# np is the numpy module, contours are the expected results,
# frame is each frame of the video.
# Iterate through the contours.
for contour in contours:
    # Compute the bounding box for the contour, draw
    # it on the frame, and update the text.
    x, y, w, h = cv2.boundingRect(contour)
    # Find the mask and build a histogram for the object.
    mask = np.zeros(frame.shape[:2], np.uint8)
    mask[y:h, x:w] = 255
    masked_img = cv2.bitwise_and(frame, frame, mask=mask)
    obj_hist = cv2.calcHist([masked_img], [0], None, [256], [0, 256])
However, when I use matplotlib to show masked_img, it displays a dark image. obj_hist has only one element greater than 0, which is the first one. What is wrong?
The problem is the way you are setting the values in your mask. Specifically this line:
mask[y:h, x:w] = 255
You are trying to slice into each dimension of the image by using y:h and x:w to set up the mask. The left of the colon is the starting row or column, and the right of the colon denotes the ending row or column. Given that you start at y, you need to offset by h using the same reference y; the same goes for x and w.
Slicing where the right value of the colon is not greater than the left does not modify the array at all, which is why you aren't getting any output: the mask stays all zeroes.
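A tiny standalone demonstration of that behaviour:
import numpy as np

a = np.zeros(5, np.uint8)
a[3:1] = 255  # start > stop: an empty slice, so nothing is assigned
print(a)      # [0 0 0 0 0] -- the array is unchanged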
You probably meant to do:
mask[y:y+h, x:x+w] = 255
This will properly set the proper region given by cv2.boundingRect to white (255).

Replacing a segmented part of an image with its unsegmented part

I am trying to replace a segmented part of an image with its unsegmented part using OpenCV in Python. The pictures below will make clear what I mean.
The following picture is the first one, before segmentation :
This is the picture after segmentation :
This is the third picture, after doing what I'm talking about :
How can I do this ? Thanks in advance for your help !
This is actually pretty easy. All you have to do is take your picture after segmentation, and multiply it by a mask where any pixel in the mask that is 0 becomes 1, and anything else becomes 0.
This will essentially blacken all of the pixels with the exception of the pixels within the mask that are 1. By multiplying each of the pixels in your image by the mask, you would effectively produce what you have shown in the figure, but the background is black. All you would have to do now is figure out which locations in your mask are white and set the corresponding locations in your output image to white. In other words:
import cv2
# Load in your original image
originalImg = cv2.imread('Inu8B.jpg',0)
# Load in your mask
mask = cv2.imread('2XAwj.jpg', 0)
# Get rid of quantization artifacts
mask[mask < 128] = 0
mask[mask >= 128] = 1
# Create output image
outputImg = originalImg * (mask == 0)
outputImg[mask == 1] = 255
# Display image
cv2.imshow('Output Image', outputImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Take note that I downloaded the images from your post and loaded them from my computer. Also, your mask has some quantization artifacts due to JPEG compression, so I thresholded at intensity 128 to ensure that the mask consists of only 0s and 1s.
This is the output I get:
Hope this helps!
Basically, you have a segmentation mask and an image, and all you need to do is copy the image pixels corresponding to the pixels in the label mask. Generally the mask dimensions and the image dimensions are the same (if not, resize your mask to the image dimensions). The segmentation pixels belonging to a particular object all share the same integer value (1, 2, 3, etc., with background pixels having a value of 0). So find out which pixel coordinates hold the label value of interest and use those coordinates to look up the intensity values in the image. If you know how to access a pixel coordinate and read an image in your programming environment, following this procedure should get you there.
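A short numpy sketch of that procedure, assuming image is an (H, W, 3) uint8 array and seg_map is an (H, W) integer label mask with 0 as background:
import numpy as np

label = 2                      # hypothetical label of interest
sel = seg_map == label         # boolean map of the object's pixels
cut_out = np.zeros_like(image)
cut_out[sel] = image[sel]      # copy only the labelled pixels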
