I have three greyscale masks generated by OpenCV, each filtering for one specific color. I want to merge them quickly, without looping through every pixel in the image (my application requires real-time performance), and get an output similar to this:
I've been able to create the three masks separately, but they still need to be combined into one image, where each mask represents a different channel. The first mask would be the red channel, the second would be green, and the third blue.
Clarification: The masks are basically 1/3 of the final image I want to create. I need a way to combine them so that they don't all end up the same color in the output and become incomprehensible.
More details:
I want to avoid using lots of loops, since the current filter takes 4 seconds to process a 272 by 154 image. The masks are just masks created with the cv2.inRange function.
I'm not very good with numpy or OpenCV yet, so any solution that runs reasonably fast (if it can process 15-20 fps it's totally usable) would be of great help.
As @Rotem said, using cv2.merge to combine the three matrices into one BGR image seems to be one of the best solutions. It's really fast. Thanks!
bgr = cv2.merge((b, g, r))
I don't know how I didn't see it while reading the documentation. Oh well.
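For context, here is a minimal end-to-end sketch of that solution, assuming the three masks come from cv2.inRange on an HSV image as described above (the file name and the range values are placeholders, not the ones from the original filter):

import cv2

frame = cv2.imread('frame.png')  # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# placeholder ranges; substitute the ones used for your own masks
r = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
g = cv2.inRange(hsv, (50, 100, 100), (70, 255, 255))
b = cv2.inRange(hsv, (110, 100, 100), (130, 255, 255))

# each single-channel mask becomes one channel of the output (OpenCV uses BGR order)
bgr = cv2.merge((b, g, r))
cv2.imshow('merged', bgr)
cv2.waitKey(0)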
Another way I used once (note that this places the three images side by side rather than merging them into one image's channels):
import cv2
import numpy as np

def merge_images(img1, img2, img3):
    img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2RGB)
    img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2RGB)
    img3 = cv2.cvtColor(img3, cv2.COLOR_GRAY2RGB)
    # axis=1 concatenates horizontally, producing one wide image
    img = np.concatenate((img1, img2, img3), axis=1)
    return img
Related
I have two categories of images segmented using Mask R-CNN. The first category contains images of dents on cars.
The second category contains images of reflections/shadows falling on cars that the Mask R-CNN detects as dents.
Are there any image processing methods that can distinguish between the two? I have tried Canny, Gaussian, LBP, SIFT, watershed, etc. Can anyone suggest a suitable approach?
Thanks in advance!!
If the shadow images are generally darker than the dent images, you can convert the images to HSV and compare the average V values of the pixels; the V value holds the brightness of a pixel.
You can use hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).
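A minimal sketch of that comparison (the file name and the threshold of 100 are placeholder assumptions, to be tuned on real data):

import cv2

img = cv2.imread('car.jpg')  # hypothetical input image
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# V is the third HSV channel; its mean is a rough brightness measure
mean_v = hsv_img[:, :, 2].mean()
print('likely shadow' if mean_v < 100 else 'likely dent')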
How can I get clear images without data or color loss? So far I have tried many approaches using the CLAHE algorithm and this post (per-pixel gain using localmax, i.e. background and mask), but in every approach either data or color gets lost.
Example image:
My final output with the best approach used at that post:
Desired output:
The colors of the images inside the input image also get lost or fade.
You can do a dynamic range stretch using Python/OpenCV/Skimage as follows. Adjust the in_range values as desired. Increasing the first one will darken the dark areas and decreasing the second one will lighten the light areas.
Input:
import cv2
import skimage.exposure
# load input image
img = cv2.imread('delaware.jpg')
out1 = skimage.exposure.rescale_intensity(img, in_range=(50,190), out_range=(0,255))
cv2.imwrite('delaware.jpg_stretch_50_190.png', out1)
cv2.imshow('Out1', out1)
cv2.waitKey(0)
cv2.destroyAllWindows()
I want to remove the background noise from microscopy images. I have tried different methods (histogram equalization and morphological transformations), but I came to the conclusion that the best method is to remove low-intensity pixels.
I can do this using photoshop:
As you can see, figure A is the original. I have included its histogram, shown in the bottom inset. Applying the transformation in B, I get the desired final image, where the background is removed. See the transformation I applied in the bottom inset of B.
I started working on the Python code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('lamelipodia/Lam1.jpg', 1)
#get green channel to gray
img_g = img[:,:,1]
#get histogram
plt.hist(img_g.flatten(), 100, [0,100], color = 'g')
cv2.imshow('b/w',img_g)
#cv2.imwrite('bw.jpg',img_g)
plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()
I converted the figure to black and white
and got the histogram:
which is similar to the one from Photoshop.
I have been browsing Google and SO, but although I found similar questions, I could not find how to modify the histogram as I described.
How can I apply this kind of transformation using Python (numpy or OpenCV)? Or if you think this has been answered before, please let me know. I apologize, but I have really been looking for this.
Following Piglet's link (docs.opencv.org/3.3.1/d7/d4d/tutorial_py_thresholding.html), the function needed for the goal is:
ret,thresh5 = cv2.threshold(img_g,150,255,cv2.THRESH_TOZERO)
This is not easy to read.
It has to be understood as:
if any pixel in img_g is less than 150, make it zero; keep the rest at the same value it had.
If we apply this to the image, we get:
The trick to reading the function is the threshold type flag that is passed. For example, cv2.THRESH_BINARY makes it read as:
if any pixel in img_g is less than 150, make it zero (black); make the rest 255 (white)
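A minimal sketch comparing the two flags, reusing img_g from the code above:

# pixels above 150 keep their value, the rest become zero
# (the 255 maxval argument is ignored by THRESH_TOZERO)
ret, to_zero = cv2.threshold(img_g, 150, 255, cv2.THRESH_TOZERO)
# pixels above 150 become 255 (white), the rest become zero (black)
ret, binary = cv2.threshold(img_g, 150, 255, cv2.THRESH_BINARY)

cv2.imshow('THRESH_TOZERO', to_zero)
cv2.imshow('THRESH_BINARY', binary)
cv2.waitKey(0)
cv2.destroyAllWindows()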
I'm trying to blur around specific regions in a 2D image (the data is an array of size m x n).
The points are specified by an m x n mask. cv2 and scikit-image are available.
I tried:
Simply applying blur filters to the masked image. But that isn't working.
Extracting the points to blur by setting the rest to np.nan, blurring, and reassembling. That also isn't working, because the blur obviously needs the surrounding points to work correctly.
Any ideas?
Cheers
What was the result in the first case? It sounds like a good approach. What did you expect, and what did you get?
You can also try something like the following (a sketch is given after the list):
1. Either create a copy of the whole image or just a slightly bigger ROI (to include the samples that will be used for blurring).
2. Apply blur to the created image.
3. Apply masks to the two images (from the original image take everything except the ROI, and from the blurred image take the ROI).
4. Add the two masked images.
If you want a smoother transition, make sure the masks aren't binary. You can smooth them using another blur (blur one mask and create the second one by calculating mask2 = 1 - mask1; by doing so you can be sure the weights always add up to one).
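A minimal sketch of those steps, assuming img is a grayscale image and mask marks the regions to blur with white (the file names and the kernel size are placeholders):

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# blur the whole image so every pixel has valid neighbours to blur with
blurred = cv2.GaussianBlur(img, (15, 15), 0)

# soften the mask so the transition is smooth; the two weights sum to one
mask1 = cv2.GaussianBlur(mask, (15, 15), 0)
mask2 = 1.0 - mask1

# blurred content where mask1 is 1, original content where it is 0
out = blurred * mask1 + img * mask2
cv2.imwrite('output.png', out.astype(np.uint8))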
I have some traffic camera images, and I want to extract only the pixels on the road. I have used remote sensing software before where one could specify an operation like
img1 * img2 = img3
where img1 is the original image and img2 is a straight black-and-white mask. Essentially, the white parts of the image would evaluate to
img1 * 1 = img3
and the black parts would evaluate to
img1 * 0 = img3
And so one could take a slice of the image and let all of the non-important areas go to black.
Is there a way to do this using PIL? I can't find anything similar to the image algebra I'm used to seeing. I have experimented with the blend function, but that just fades the two together. I've read up a bit on numpy, and it seems like it might be capable of this, but I'd like to know for sure that there is no straightforward way of doing it in PIL before I go diving in.
Thank you.
The Image.composite method can do what you want. The first image should be a constant value representing the masked-off areas, the second should be the original image, and the third is the mask.
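A minimal sketch of that call (the file names are placeholders; the mask is assumed to be mode 'L' with white marking the masked-off areas):

from PIL import Image

img = Image.open('traffic.jpg').convert('RGB')  # original image
mask = Image.open('mask.png').convert('L')      # white = mask off, black = keep
black = Image.new('RGB', img.size, (0, 0, 0))   # constant for the masked-off areas

# where the mask is white the constant shows; where it is black the original shows
img3 = Image.composite(black, img, mask)
img3.save('road_only.png')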
You can also use the PIL library to mask the images by pasting. Add an alpha value to img2, as you can't just paste it over img1 as-is; without an alpha value you won't see what is underneath.
img2.putalpha(128)  # 0 would be completely transparent, 255 fully opaque; 128 is a half blend
Then you can paste img2 over img1, using img2 itself as the mask:
img1.paste(im=img2, box=(0, 0), mask=img2)
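Note that paste uses the alpha channel of img2 itself as the paste mask here, so the value given to putalpha directly controls how strongly img2 shows over img1.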